AI Hands-On: A collection of notebooks and other files to help you learn AI from scratch.

General

The second is Citrini’s “The 2028 Global Intelligence Crisis,” a financial variation of the take-off scenario where AI ends up doing everything that doomer pundits and industry leaders have been warning of, but instead of killing everyone it stops at killing the $13 trillion mortgage market (because of course, that’s the most dramatic thing that could happen if you’re a financial analyst). I read it until I reached the point where it gives three examples of SaaS firms that could be affected—“Monday.com, Zapier, and Asana”—because when I asked Claude 4.6 Opus about the “SaaSpocalypse” two weeks earlier, it gave those exact three examples to illustrate its point. It might be a coincidence, but stochastic parrots are usually more parrot than stochastic.

The third one, and a personal favorite, is Sam Kriss’s Harper’s “Child’s play,” a retelling of Kriss’s experience among some of the most idiosyncratic personalities of the San Francisco tech scene. This is the last one chronologically and, in literary merit and arguably historical value, the best of the three. Kriss, unlike, I presume, Shumer and Citrini, is a veteran in the sport of disguising fiction as non-fiction—worthy heir to the Borgesian style, although perhaps born at the worst time possible now that everyone seems to be shamelessly copycatting his schtick—which is apparent from the fact that, among the three texts, his is the only one that feels real. His thesis is something I imagine everyone agrees with: obsessing over being “high agency” and living your life as a means to an end is, ultimately, a relentless run-up for a date with death.

Automation with ...

Suggestions

10 Agent Projects: a list of 10 AI agent projects you can try this weekend. They range from basic single agents to more advanced multi-agent systems.

History

Criticism

Tools

Reading

Definitions

Assistants

Expert Systems / Rules Engines

Fuzzy Logic

Java:

Neural Networks

Natural Language

Large Language Models (LLMs)

An advanced artificial intelligence (AI) system, built on deep learning and transformer architectures, that is pre-trained on massive amounts of text data to understand, process, and generate human-like language. LLMs learn to predict the next word in a sequence, which enables them to perform tasks like text generation, translation, summarization, and responding to complex queries. They are not perfect oracles, however, and can generate incorrect information or exhibit bias.
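The next-word-prediction objective described above can be sketched with a toy model. This is not a transformer: it is a minimal bigram frequency model, with a made-up corpus, meant only to show what "predict the next word in a sequence" means mechanically.

```python
from collections import Counter, defaultdict

# Toy corpus (illustrative only) and whitespace tokenization.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation of `token` in the corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" twice, more than any other word
```

An LLM does the same thing conceptually, but replaces the frequency table with a neural network that scores every token in its vocabulary given the full preceding context, not just the previous word.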

Recursive Language Models

Small Language Models (SLMs)

An AI model designed to handle specific tasks, using fewer parameters and less computational power than a large language model (LLM). This efficiency makes SLMs faster to train, more accessible, and suitable for deployment on devices with limited resources or for performing specialized functions, such as data extraction from documents, language translation, or specific conversational agents. In terms of size, SLM parameters range from a few million to a few billion, as opposed to LLMs with hundreds of billions or even trillions of parameters. Parameters are internal variables, such as weights and biases, that a model learns during training. These parameters influence how a machine learning model behaves and performs.
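Since the definition above distinguishes SLMs from LLMs by parameter count, it may help to see where those counts come from. The sketch below tallies the weights and biases of a tiny fully connected network; the layer sizes are made up for illustration.

```python
# Hypothetical layer sizes: input -> hidden -> output.
layer_sizes = [784, 128, 10]

total = 0
for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
    weights = fan_in * fan_out  # one weight per input/output pair
    biases = fan_out            # one bias per output unit
    total += weights + biases

print(total)  # 784*128 + 128 + 128*10 + 10 = 101770
```

The same bookkeeping, applied to the attention and feed-forward layers of a transformer, is what yields the "few million to a few billion" figures quoted for SLMs and the hundreds of billions quoted for LLMs.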

Retrieval Augmented Generation (RAG)

Coding Assistants

Generative AI

Science

Machine Learning

Semantic Entity Resolution (Knowledge Graphs) (?)

AI Agent Knowledge Base


Detail Pages:

Last modified 15 April 2026