Earlier today Querium announced the general release of Project::smarter, its new instructional platform for generative AI text completions. Smarter is a no-code platform aimed at non-technical learners. Its unique plugin technology provides a novel approach to extending LLM foundation models with proprietary data while protecting the data’s origins and helping to avoid the hallucinations typical of the current state of the art in generative AI. It is ideal for learning experiences from high school through graduate university. It runs at scale and is available as a hosted SaaS, an on-premises installation, or a managed cloud deployment.

Contact Kent Fuka for pricing and sales information.
Smarter is LLM vendor-agnostic and provides enterprise-class features for audit, logging, data privacy, cost controls, and internal billing. It works with most popular LLMs and can be integrated into existing workflows. It is the simplest, fastest way to create sophisticated text completion solutions that leverage LLMs from multiple vendors.
Lawrence McDaniel, lead product engineer for Smarter, says: “We borrowed heavily from Kubernetes’ use of YAML-based manifests as a simple and powerful strategy for managing the complete life cycle of complex AI resources.”
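The announcement does not include a sample manifest, but the Kubernetes analogy suggests a declarative format along the lines of the sketch below. This is purely illustrative: the apiVersion, kind, and every field name here are assumptions for the sake of the example, not Smarter’s published schema.

```yaml
# Hypothetical Smarter manifest, sketched by analogy to Kubernetes.
# None of these field names come from Querium's documentation.
apiVersion: smarter.example/v1
kind: Chatbot
metadata:
  name: algebra-tutor
  description: A tutoring assistant for high school algebra.
spec:
  provider: openai        # vendor-agnostic: any supported LLM provider
  model: gpt-4o
  systemPrompt: >
    You are a patient algebra tutor. Use the attached plugins
    for any factual course data.
  plugins:
    - algebra-course-catalog   # references a Plugin resource by name
```

As with Kubernetes, the appeal of this approach is that a resource’s entire life cycle can be driven from declarative files kept under version control.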
Smarter’s innovative plugin technology yields prompt results that are objectively superior to those of common extension strategies such as Retrieval-Augmented Generation (RAG), fine-tuning, and embeddings, while also being both more secure and orders of magnitude more cost-effective.
Smarter Plugins are built on an LLM API feature generally referred to as “Function Calling,” which a growing number of LLMs include in their APIs. The basic use case is as follows: you write a custom function in, say, Python, and when prompting the LLM you include a human-readable description of your function’s purpose and its API, expressed in the LLM’s prescribed description protocol, which is typically JSON, similar to a JSON schema for a data model. The LLM decides whether to invoke your function based on its own analysis of each incoming prompt, weighed against the function description and API you provided. The LLM, at its sole discretion, will invoke your function if and only if it believes the function’s results could lead to a better, higher-quality response. Function Calling is an astonishingly powerful yet underutilized feature of LLMs, largely because it depends on advanced programming skills that fall outside the learning journey of many otherwise highly skilled prompt engineers.
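For readers unfamiliar with the mechanics, here is a minimal sketch of Function Calling using one vendor’s implementation (OpenAI’s Python SDK); the function name, schema, and data are hypothetical, and other vendors expose the same idea with slightly different protocols.

```python
# Minimal sketch of LLM "Function Calling" via the OpenAI Python SDK.
# get_course_catalog and its schema are hypothetical examples.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_course_catalog(subject: str) -> str:
    """Our custom function: returns hard data the model cannot know."""
    catalog = {"algebra": ["MATH 101", "MATH 201"], "chemistry": ["CHEM 110"]}
    return json.dumps(catalog.get(subject.lower(), []))

# Human-readable description of the function and its API, expressed in the
# vendor's prescribed JSON-schema-like protocol.
tools = [{
    "type": "function",
    "function": {
        "name": "get_course_catalog",
        "description": "Look up available courses for an academic subject.",
        "parameters": {
            "type": "object",
            "properties": {
                "subject": {"type": "string",
                            "description": "Academic subject, e.g. 'algebra'"},
            },
            "required": ["subject"],
        },
    },
}]

messages = [{"role": "user", "content": "What algebra courses can I take?"}]
response = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools
)

# The LLM decides, at its sole discretion, whether to call the function.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    call = tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_course_catalog(**args)
    # Hand the function result back so the model can compose its final answer.
    messages.append(response.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    final = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
```

Note that the model, not the developer, decides whether the function is called; the developer’s only lever is the quality of the description supplied alongside the prompt.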
Smarter Plugins generalize the LLM “Function Calling” API by providing, in effect, a parameterized, user-defined API on top of the LLM API. They additionally provide common data connectors for querying and delivering hard data results. The simplest of these is a Static data set, in which you supply the hard data directly in a Smarter manifest; Smarter also provides enterprise-grade connectors for common SQL databases and for REST APIs.
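To make the Static connector concrete, a plugin manifest might look something like the sketch below; as before, every field name is an illustrative assumption rather than Smarter’s actual schema.

```yaml
# Hypothetical Plugin manifest with a Static data connector.
# Field names are illustrative assumptions, not Smarter's published schema.
apiVersion: smarter.example/v1
kind: Plugin
metadata:
  name: algebra-course-catalog
spec:
  selector:
    # Hints that help the LLM decide when to invoke this plugin,
    # analogous to a Function Calling description.
    searchTerms: [courses, catalog, algebra, enroll]
  connector:
    type: static            # alternatives might include: sql, restApi
    data:
      courses:
        - id: MATH 101
          title: Introductory Algebra
        - id: MATH 201
          title: Intermediate Algebra
```

A SQL or REST connector would presumably swap the static data block for connection details and a query, while the selector and manifest life cycle stay the same.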