Hey there! This week, I dove into what GenZ software engineers really think, tackled DIY weather stations, explored teaching AI to write like us with LangChain, and considered the crazy charts of a new AI influencer. Plus, I checked out cool AI products, discussed deploying models, and even discovered why we get bored of life’s best things. Oh, and epic sandwiches—yum!
- What do GenZ software engineers really think?: Young software engineers discuss values, what frustrates them about working in tech, and what they really think of older colleagues. Responses to our exclusive survey.
- A DIY weather display with dedicated outdoor sensor station: Weather stations are popular projects in the maker community because they’re useful and usually quite affordable to construct. But most that we see are really weather information displays that gather data through the internet from stations in the region. That data is fairly accurate, but there can be minor differences due to microclimate zones. So, Wilson Malone […]
- Using LangChain to teach an LLM to write like yourself: These documents are larger than what many LLMs can digest. ChatGPT 3.5 has only a 4,096-token window, and one article alone is roughly 2,000 tokens. To work around that limit, I would have to build a pipeline from which the LLM can retrieve these documents. Splitting the documents into chunks is also a good strategy, as it helps the LLM generate this content; I would also delete some of the unnecessary text in each document. (A quick sketch of this chunking step appears after the list below.)
- A new A.I. influencer is producing some of the most criminal charts I’ve ever seen: Who is “Leopold Aschenbrenner”? PLUS: Bill Ackman, Vivek Ramaswamy, and the anti-woke finance grift.
- Building AI products — Benedict Evans: How do we build mass-market products that change the world around a technology that gets things ‘wrong’? What does wrong mean, and how is that useful?
- It’s NOT About The Content: The business of media is the provisioning of consumer contact.
- The Many Ways to Deploy a Model: There are many ways to deploy models and perform inference. Here, we share our decision rubric for model deployments using LLM inference as an example.
- The state of AI in early 2024: Gen AI adoption spikes and starts to generate value: As generative AI adoption accelerates, survey respondents report measurable benefits and increased mitigation of the risk of inaccuracy. A small group of high performers lead the way.
- Payments 101 for a Developer: An open-source payments switch written in Rust to make payments fast, reliable, and affordable. From the juspay/hyperswitch wiki.
- Introducing Apple’s On-Device and Server Foundation Models: At the 2024 Worldwide Developers Conference, we introduced Apple Intelligence, a personal intelligence system integrated deeply into…
- Private Cloud Compute: A new frontier for AI privacy in the cloud: Secure and private AI processing in the cloud poses a formidable new challenge. To support advanced features of Apple Intelligence with larger foundation models, we created Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed specifically for private AI processing. Built with custom Apple silicon and a hardened operating system, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple. We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale.
- OpenAI and Apple announce partnership: ChatGPT will be integrated into Apple experiences.
- Why We Get Bored of the Best Things in Life—and How to Fight It: Our minds are wired to habituate to any situation.
- We Challenged Top Chefs to Make Epic Sandwiches. Here’s What We Learned.: It turns out livermush is actually pretty good, pickled okra is just as good as pickles, and potato chips are the ticket to textural success
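
For the LangChain item above, here's a minimal sketch of the chunking step it describes: splitting long articles into pieces small enough that several of them, plus a prompt, fit comfortably inside GPT-3.5's 4,096-token window before they're handed to a retrieval pipeline. The file path, chunk sizes, and the choice of `RecursiveCharacterTextSplitter` are my assumptions for illustration, not the author's exact setup.

```python
# Minimal sketch of chunking articles for a retrieval pipeline (assumptions:
# articles live in ./articles/*.txt and ~4 characters per token is close enough
# for sizing chunks; neither detail comes from the original post).
from pathlib import Path

from langchain.text_splitter import RecursiveCharacterTextSplitter

# Target roughly 500 tokens per chunk so a handful of chunks plus a prompt
# stay well under GPT-3.5's 4,096-token window.
splitter = RecursiveCharacterTextSplitter(
    chunk_size=2000,    # characters, roughly 500 tokens
    chunk_overlap=200,  # small overlap so ideas aren't cut off mid-sentence
)

chunks = []
for path in Path("articles").glob("*.txt"):
    text = path.read_text(encoding="utf-8")
    # Split each article into pieces the LLM can actually ingest.
    for chunk in splitter.split_text(text):
        chunks.append({"source": path.name, "text": chunk})

print(f"Prepared {len(chunks)} chunks for the retrieval pipeline")
```

From here the chunks would typically be embedded and stored in a vector store so the LLM can pull back only the most relevant pieces at generation time, which is the retrieval pipeline the article alludes to.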