• 5 Posts
  • 482 Comments
Joined 2 years ago
Cake day: June 19th, 2023


  • Apple TV’s hardware is just so much more capable than other platforms that Apple has been coasting along for the last several generations of “Apple TV 4K”. Our Gen 1 is over 7 years old and still super capable; the only reason we picked up a Gen 3 was to get the Thread radio in a centralized location. As an Apple user, I’m extremely glad there’s going to be a new competitor in the space, which will hopefully push Apple further along the innovation path.


  • Ask it for a second opinion on medical conditions.

    Sounds insane, but they are leaps and bounds better than blindly Googling and self-diagnosing every condition under the sun when the symptoms only vaguely match.

    Once the LLM helps you narrow in on a couple of possible conditions based on the symptoms, then you can dig deeper into those specific ones, learn more about them, and have a slightly more informed conversation with your medical practitioner.

    They’re not a replacement for your actual doctor, but they can help you learn and have better discussions with your actual doctor.


  • If you can serve content locally without a tunnel (i.e. no CGNAT or port blocking by your ISP), you can configure your server to respond only to the Cloudflare IP ranges and your intranet IP range; install the Cloudflare origin certificate for your domain and trust it for local traffic; enable the orange cloud; and tada: access from anywhere without a VPN. Traffic is encrypted externally between user <> Cloudflare and Cloudflare <> your service, encrypted internally between user <> service, and only someone on the intranet or coming through Cloudflare can reach it. You can still put Zero Trust SSO on the subdomain so Cloudflare authenticates all users before proxying the actual request.
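
    To make the allow-list half of this concrete, here’s a minimal Python sketch of the “only Cloudflare ranges plus intranet” check. The ranges are placeholders (Cloudflare publishes the current list at https://www.cloudflare.com/ips/), and in practice you’d enforce this at the firewall or reverse proxy rather than in application code.

    ```python
    # Rough sketch: allow traffic only from Cloudflare's published ranges or the LAN.
    from ipaddress import ip_address, ip_network

    # Example entries only -- pull the current list from https://www.cloudflare.com/ips/
    CLOUDFLARE_RANGES = [ip_network(c) for c in ("173.245.48.0/20", "104.16.0.0/13")]
    INTRANET_RANGE = ip_network("192.168.1.0/24")  # adjust to your LAN

    def is_allowed(client_ip: str) -> bool:
        """True if the request arrived via Cloudflare or from the local network."""
        addr = ip_address(client_ip)
        return any(addr in net for net in CLOUDFLARE_RANGES) or addr in INTRANET_RANGE

    print(is_allowed("104.16.1.1"))    # True  (Cloudflare edge)
    print(is_allowed("192.168.1.50"))  # True  (intranet)
    print(is_allowed("8.8.8.8"))       # False (everyone else is dropped)
    ```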


  • If memory serves, 175B parameters is for the GPT-3 model, not even the 3.5 model that caught the world by surprise; and they have not disclosed parameter counts for GPT-4, 4o, or o1 yet. If memory also serves, GPT-3 was primarily English and only had a relatively small vocabulary (I think 50K tokens or something to that effect) to consider as next-token candidates. Now that it works in multiple languages and is multimodal, the parameter space must be much, much larger.

    The amount of things it can do now is incredible, but the perceived incremental improvements in LLMs will probably slow down (since progress is tracking the predicted scaling-law lines in log space)… until the next big thing (neural nets > expert systems > deep learning > LLMs > ???). Such an exciting time we’re in!

    Edit: found it. Roughly 50K tokens in the input/output embeddings for GPT-3. 3Blue1Brown has a really good explanation here for anyone interested: https://youtu.be/wjZofJX0v4M
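
    For scale, here’s a quick back-of-the-envelope in Python. The figures are my assumption from the GPT-3 paper (50,257-token BPE vocabulary, d_model of 12,288 for the 175B model), not something stated above; the point is that the embedding table is a tiny slice of the total, so most of the 175B sits in the transformer layers themselves.

    ```python
    # Back-of-the-envelope: size of GPT-3's token embedding table.
    # Assumed figures from the GPT-3 paper: 50,257 BPE tokens, d_model = 12,288 (175B model).
    vocab_size = 50_257
    d_model = 12_288
    total_params = 175e9

    embedding_params = vocab_size * d_model          # one d_model-sized vector per token
    print(f"{embedding_params / 1e6:.0f}M params")   # ~618M
    print(f"{embedding_params / total_params:.2%}")  # ~0.35% of the 175B total
    ```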


  • It’s not even that.

    By and large, most industry-standard software is only available on Windows and macOS. Take word processing, for example: it doesn’t matter if there are open source alternatives that get it 95% of the way there. Companies by and large won’t run the risk of that last 5% (or 1%, or 0.01%, it doesn’t matter) creating a misunderstanding with another business entity, so they will continue to purchase the standard software and expect their employees to use it. And people will by and large continue to train themselves on that software so they have employable skills and can put food on the table.

    No one cares how easy or hard it is to install something; IT (or the local brick-and-mortar computer retailer) takes care of all that. Whether or not it is compatible with consistently making money / putting food on the table is far more important.

    Until we have Microsoft Office for Linux, Adobe Creative Suite for Linux, Autodesk AutoCAD for Linux, and so on (not the janky “Microsoft Office for Mac” little-cousin implementations, but proper first-party Linux releases), it is unlikely we’ll see a competitive level of Linux desktop adoption.