Uncategorized (yet)

General

History

Criticism

Tools

Tutorials

Reading

Definitions

Expert Systems/Rules Engines

Fuzzy Logic

Java

Natural Language Processing

Natural Language Programming

Large Language Models (LLMs)

An advanced artificial intelligence (AI) system, built on deep learning and transformer architectures, that is pre-trained on massive amounts of text data to understand, process, and generate human-like language. LLMs learn to predict the next word in a sequence, which enables tasks like text generation, translation, summarization, and responding to complex queries. They are not perfect oracles, however: they can generate incorrect information and exhibit bias.
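The "predict the next word" idea can be illustrated with a toy model. The sketch below is not a transformer; it is a minimal bigram frequency model over a made-up corpus, shown only to make next-word prediction concrete:

```python
from collections import Counter, defaultdict

# Toy corpus; real LLMs train on billions of tokens, not nine words.
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which (a bigram table).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice, "mat" once)
```

An LLM does the same job with a learned probability distribution over a whole vocabulary, conditioned on the entire preceding context rather than a single word.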

Small Language Models (SLMs)

An AI model designed to handle specific tasks, using fewer parameters and less computational power than a large language model (LLM). This efficiency makes SLMs faster to train, more accessible, and suitable for deployment on devices with limited resources, or for specialized functions such as data extraction from documents, language translation, or powering focused conversational agents. In terms of size, SLMs typically range from a few million to a few billion parameters, as opposed to LLMs with hundreds of billions or even trillions. Parameters are the internal variables, such as weights and biases, that a model learns during training; they determine how the trained model behaves and performs.
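Where those parameter counts come from can be shown with simple arithmetic. The sketch below counts the weights and biases in a fully connected layer; the layer sizes are illustrative, not taken from any particular model:

```python
def linear_params(n_in, n_out):
    """Parameters in a fully connected layer: one weight per
    input-output pair, plus one bias per output unit."""
    return n_in * n_out + n_out

# A toy feed-forward block (sizes are hypothetical):
up = linear_params(768, 3072)     # 768*3072 + 3072 = 2,362,368
down = linear_params(3072, 768)   # 3072*768 + 768  = 2,360,064
print(up + down)                  # → 4722432, already ~4.7M for one block
```

Stacking dozens of such blocks, plus attention and embedding matrices, is how totals climb into the millions (SLMs) or hundreds of billions (LLMs).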

Retrieval Augmented Generation (RAG)

Coding Assistants

Generative AI

Machine Learning

Semantic Entity Resolution (Knowledge Graphs) (?)


Detail Pages:

Last modified 17 October 2025