docs/content/whats-new.md
+++
disableToc = false
title = "News"
weight = 7
url = '/basics/news/'
icon = "newspaper"
+++
Release notes have now moved completely over to GitHub releases.
You can see the release notes at https://github.com/mudler/LocalAI/releases.
This release brings a major overhaul in some backends.
Breaking/important changes:
- llama-stable has been renamed to llama-ggml {{< pr "1287" >}}
Due to the Python dependencies, the container images grew in size.
If you still want smaller images without the Python dependencies, you can use the corresponding image tags ending with -core.
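For example, to pull a slimmer image (the exact tag below is an assumption - check the available tags on quay.io):

```bash
# Hypothetical tag: a -core image without the Python-based backends
docker pull quay.io/go-skynet/local-ai:v2.0.0-core
```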
Full changelog: https://github.com/mudler/LocalAI/releases/tag/v2.0.0
This release is a preparation before v2 - the efforts will now go into refactoring, polishing, and adding new backends. Follow up on: https://github.com/mudler/LocalAI/issues/1126
This release brings the llama-cpp backend, a C++ backend tied to llama.cpp that follows and tracks its recent versions more closely. It is not feature-compatible with the current llama backend, but the plan is to sunset the current llama backend in favor of this one. This will probably be the last release containing the older llama backend written in Go and C++. The major improvement with this change is that there are fewer layers that could expose potential bugs, and it also eases maintenance.
This release brings support for AMD thanks to @65a. See more details in {{< pr "1100" >}}
Thanks to @jespino, the local-ai binary now has more subcommands, allowing you to manage the gallery or try out inferencing directly from the command line - check it out!
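A quick sketch of what that looks like (the subcommand names are assumptions based on the gallery-management feature and may differ between versions):

```bash
# Hypothetical session: manage the gallery from the command line
local-ai models list                # list models available in the configured galleries
local-ai models install gpt4all-j   # install a model from the gallery
```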
This is an exciting LocalAI release! Besides bug fixes and enhancements, this release brings the backends to a whole new level by extending support to vllm, and to vall-e-x for audio generation!
Check out the documentation for vllm here and for Vall-E-X here.
Hey everyone, Ettore here - I'm so happy to share this release! While this summer is hot, it apparently doesn't stop LocalAI development :)
This release brings a lot of new features, bug fixes, and updates! Also a big shout out to the community - this was a great release!
From this release the llama backend supports only gguf files (see {{< pr "943" >}}). LocalAI however still supports ggml files. We ship a version of llama.cpp from before that change in a separate backend, named llama-stable, to still allow loading ggml files. If you were specifying the llama backend manually to load ggml files, from this release on you should use llama-stable instead, or not specify a backend at all (LocalAI will handle this automatically).
The [Diffusers]({{%relref "features/image-generation" %}}) backend received various enhancements, including support for generating images from images, longer prompts, and support for more schedulers. See the [Diffusers]({{%relref "features/image-generation" %}}) documentation for more information.
It is now possible to load LoRA adapters for llama.cpp. See {{< pr "955" >}} for more information.
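A minimal sketch of a model definition using an adapter (the lora_adapter and lora_base field names are assumed from the advanced model configuration; file names are placeholders):

```bash
# Sketch: write a model YAML that loads a LoRA adapter on top of a base model
cat > models/llama-lora.yaml <<'EOF'
name: llama-lora
backend: llama
parameters:
  model: open-llama-7b-open-instruct.ggmlv3.q4_0.bin  # placeholder base model
lora_adapter: my-adapter.bin                          # placeholder adapter file
lora_base: open-llama-7b-open-instruct.ggmlv3.q4_0.bin
EOF
```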
On devices with a single GPU, it is now possible to specify --single-active-backend to allow only one backend to be active at a time {{< pr "925" >}}.
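For example, a minimal invocation could look like this:

```bash
# Sketch: keep at most one backend loaded at a time (useful on single-GPU hosts)
./local-ai --single-active-backend
```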
Thanks to the continuous community efforts (another cool contribution from {{< github "dave-gray101" >}}), it's now possible to shut down a backend programmatically via the API. There is an ongoing effort in the community towards better handling of resources. See also the 🔥 Roadmap.
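As a sketch, shutting down the backend serving a given model could look like this (endpoint path assumed from the backend-monitoring API; adjust host and model name to your setup):

```bash
# Sketch: ask LocalAI to stop the backend currently serving a model
curl http://localhost:8080/backend/shutdown \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo"}'
```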
Thanks to community efforts, we now have a new how-to website with various examples of how to use LocalAI. This is a great starting point for new users! We are currently working on improving it further - a huge shout out to {{< github "lunamidori5" >}} from the community for the impressive efforts on this!
Did you know that we now have a few cool bots in our Discord? Come check them out! We also have an instance of LocalAGI ready to help you out!
Join our Discord community! Our vibrant community is growing fast, and we are always happy to help! https://discord.gg/uJAeKSAGDy
The full changelog is available here.
This release brings four(!) new backends to LocalAI: [🐶 Bark]({{%relref "features/text-to-audio#bark" %}}), [🦙 AutoGPTQ]({{%relref "features/text-generation#autogptq" %}}), [🧨 Diffusers]({{%relref "features/image-generation" %}}), [🦙 exllama]({{%relref "features/text-generation#exllama" %}}) and a lot of improvements!
[Bark]({{%relref "features/text-to-audio#bark" %}}) is a text-prompted generative audio model - it combines GPT techniques to generate audio from text. It is a great addition to LocalAI, and it's available in the container images by default.
It can also generate music; see the example: lion.webm
[AutoGPTQ]({{%relref "features/text-generation#autogptq" %}}) is an easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
It is targeted mainly at GPU usage. Check out the [documentation]({{%relref "features/text-generation" %}}) for usage.
[Exllama]({{%relref "features/text-generation#exllama" %}}) is "a more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights". It is a faster alternative for running LLaMA models on GPU. Check out the [Exllama documentation]({{%relref "features/text-generation#exllama" %}}) for usage.
[Diffusers]({{%relref "features/image-generation#diffusers" %}}) is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Currently it is experimental and supports only image generation, so you might encounter issues with models that haven't been tested yet. Check out the [Diffusers documentation]({{%relref "features/image-generation" %}}) for usage.
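As a quick sketch, image generation goes through the OpenAI-compatible endpoint (assuming a diffusion model has been configured; see the linked documentation for model setup):

```bash
# Sketch: request a 512x512 image from a configured diffusion model
curl http://localhost:8080/v1/images/generations \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a cute baby sea otter", "size": "512x512"}'
```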
Thanks to community contributions, it's now possible to specify a list of API keys that can be used to gate API requests.
API Keys can be specified with the API_KEY environment variable as a comma-separated list of keys.
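For example (a sketch - the keys are placeholders):

```bash
# Sketch: start LocalAI accepting two API keys...
API_KEY="my-key-1,my-key-2" ./local-ai
# ...and authenticate requests with a Bearer token, as with the OpenAI API
curl http://localhost:8080/v1/models -H "Authorization: Bearer my-key-1"
```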
The model-gallery repositories are now configured by default in the container images.
LocalAGI is a simple agent that uses LocalAI functions to have a full locally runnable assistant (with no API keys needed).
See it here in action, planning a trip to San Francisco!
The full changelog is available here.
This release focuses mostly on bugfixing and updates, with just a couple of new features:
Most notably, this release brings important fixes for CUDA (and not only CUDA):
{{% notice note %}}
From this release, [OpenAI functions]({{%relref "features/openai-functions" %}}) are available in the llama backend. The llama-grammar backend has been deprecated. See also [OpenAI functions]({{%relref "features/openai-functions" %}}).
{{% /notice %}}
The full changelog is available here.
{{% notice note %}}
From this release, to use OpenAI functions you need to use the llama-grammar backend. A llama backend has been added to track llama.cpp master, and a llama-grammar backend for the grammar functionality that has not yet been merged upstream. See also [OpenAI functions]({{%relref "features/openai-functions" %}}). Until the feature is merged we will have two llama backends.
{{% /notice %}}
In this release it is now possible to specify external gRPC backends to LocalAI that can be used for inferencing {{< pr "778" >}}. It is now possible to write backends in any language, and a huggingface-embeddings backend is now available in the container image to be used with https://github.com/UKPLab/sentence-transformers. See also [Embeddings]({{%relref "features/embeddings" %}}).
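A sketch of registering an external backend at startup (the "name:address" value format follows the external-backends feature; the name and port are placeholders):

```bash
# Sketch: register an externally managed gRPC backend under the name "my-backend"
./local-ai --external-grpc-backends "my-backend:localhost:50051"
```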
Thanks to community efforts, LocalAI now supports templating for LLaMa2! More at {{< pr "782" >}}, until we update the model gallery with LLaMa2 models!
Progress has been made on supporting LocalAI with langchain. See: https://github.com/langchain-ai/langchain/pull/8134
- @ldotlopez in {{< pr "721" >}}
- @mudler in {{< pr "726" >}}
- gRPC-based backends by @mudler in {{< pr "743" >}}
- ggllm.cpp by @mudler in {{< pr "743" >}}

This allows running OpenAI functions as described in the OpenAI blog post and documentation: https://openai.com/blog/function-calling-and-other-api-updates.
This is a video of running the same example locally, with LocalAI:
And here is when it actually chooses to reply to the user instead of using functions!
Note: functions are supported only with llama.cpp-compatible models.
A full example is available here: https://github.com/mudler/LocalAI-examples/tree/main/functions
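For a flavor of the API, a function-calling request mirrors the OpenAI format (a sketch - model name and function schema are placeholders):

```bash
# Sketch: OpenAI-style function calling against a llama.cpp-compatible model
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "What is the weather like in Boston?"}],
    "functions": [{
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {"type": "string", "description": "City and state, e.g. San Francisco, CA"}
        },
        "required": ["location"]
      }
    }],
    "function_call": "auto"
  }'
```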
This is an internal refactor which is not user-facing; however, it eases maintenance and the addition of new backends to LocalAI!
Falcon support: Falcon 7b and 40b models compatible with https://github.com/cmp-nct/ggllm.cpp are now supported as well.
The former, ggml-based backend has been renamed to falcon-ggml.
From this release the default behavior of images has changed. Compilation is not triggered on start automatically, to recompile local-ai from scratch on start and switch back to the old behavior, you can set REBUILD=true in the environment variables. Rebuilding can be necessary if your CPU and/or architecture is old and the pre-compiled binaries are not compatible with your platform. See the [build section]({{%relref "installation/build" %}}) for more information.
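For example (a sketch - the image tag and volume path are placeholders):

```bash
# Sketch: recompile local-ai from source at container start (the old behavior)
docker run -e REBUILD=true -p 8080:8080 -v $PWD/models:/models \
  quay.io/go-skynet/local-ai:latest
```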
go-piper by {{< github "mudler" >}} in {{< pr "649" >}}. See [API endpoints]({{%relref "features/text-to-audio" %}}) in our documentation.

Container images (with stablediffusion):
- quay.io/go-skynet/local-ai:v1.20.0
- quay.io/go-skynet/local-ai:v1.20.0-ffmpeg
- quay.io/go-skynet/local-ai:v1.20.0-gpu-nvidia-cuda11-ffmpeg
- quay.io/go-skynet/local-ai:v1.20.0-gpu-nvidia-cuda12-ffmpeg

Updates to llama.cpp, go-transformers, gpt4all.cpp and rwkv.cpp.
The NUMA option was enabled by {{< github "mudler" >}} in {{< pr "684" >}}, along with many new parameters (mmap, mmlock, ...). See [advanced]({{%relref "advanced" %}}) for the full list of parameters.
In this release there is support for gallery repositories. These are repositories that contain models and can be used to install models. The default gallery, which contains only freely-licensed models, is on GitHub: https://github.com/go-skynet/model-gallery, but you can use your own gallery by setting the GALLERIES environment variable. An automatic index of huggingface models is available as well.
For example, you can now start LocalAI with the following environment variable to use both galleries:

```bash
GALLERIES='[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}, {"url": "github:ci-robbot/localai-huggingface-zoo/index.yaml", "name":"huggingface"}]'
```
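You can then browse everything the configured galleries offer (a sketch, assuming the API listens on the default port):

```bash
# Sketch: list all models installable from the configured galleries
curl http://localhost:8080/models/available
```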
And at runtime you can now install a model from huggingface with:

```bash
curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{ "id": "huggingface@thebloke__open-llama-7b-open-instruct-ggml__open-llama-7b-open-instruct.ggmlv3.q4_0.bin" }'
```
or a TTS voice with:

```bash
curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{ "id": "model-gallery@voice-en-us-kathleen-low" }'
```
See also [models]({{%relref "features/model-gallery" %}}) for the complete documentation.
Now LocalAI uses piper and go-piper to generate audio from text. This is an experimental feature, and it requires GO_TAGS=tts to be set during build. It is enabled by default in the pre-built container images.
To setup audio models, you can use the new galleries, or setup the models manually as described in [the API section of the documentation]({{%relref "features/text-to-audio" %}}).
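Once a voice model is set up, generating speech is a single call (a sketch - the model file name is a placeholder):

```bash
# Sketch: synthesize speech with a piper voice and save the result to a file
curl http://localhost:8080/tts \
  -H "Content-Type: application/json" \
  -d '{"model": "en-us-kathleen-low.onnx", "input": "Hello, LocalAI!"}' \
  -o hello.wav
```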
You can check the full changelog on GitHub.
Container images (with stablediffusion):
- quay.io/go-skynet/local-ai:v1.19.2
- quay.io/go-skynet/local-ai:v1.19.2-ffmpeg
- quay.io/go-skynet/local-ai:v1.19.2-gpu-nvidia-cuda11-ffmpeg
- quay.io/go-skynet/local-ai:v1.19.2-gpu-nvidia-cuda12-ffmpeg

This LocalAI release is full of new features, bugfixes and updates! Thanks to the community for the help - this was a great community release!
We now support a vast variety of models while staying backward-compatible with prior quantization formats: this new release can still load older formats, as well as the new k-quants!
- falcon-based model families (7b) ( mudler )
- /v1/completions endpoint ( samm81 )
- 2048x2048 image sizes with esrgan! ( mudler )
- llama models ( mudler )
- REBUILD=false

Two new projects now offer direct integration with LocalAI!
Support for OpenCL has been added when building from source.
You can now build LocalAI from source with BUILD_TYPE=clblas to have an OpenCL build. See also the [build section]({{%relref "getting-started/build#Acceleration" %}}).
For instructions on how to install OpenCL/CLBlast see here.
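A minimal build sketch (assuming the prerequisites from the build section are installed):

```bash
# Sketch: build LocalAI from source with OpenCL (CLBlast) acceleration
git clone https://github.com/go-skynet/LocalAI
cd LocalAI
make BUILD_TYPE=clblas build
```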
rwkv.cpp has been updated to the new ggml format commit.
Now it's possible to automatically download pre-configured models before starting the API.
Start local-ai with the PRELOAD_MODELS environment variable containing a list of models from the gallery; for instance, to install gpt4all-j as gpt-3.5-turbo:
```bash
PRELOAD_MODELS='[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}]'
```
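After startup, the preloaded model is usable right away, for example:

```bash
# Sketch: chat with the preloaded model through the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello!"}]}'
```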
llama.cpp models can now also automatically save the prompt cache state, by specifying it in the model YAML configuration file:
```yaml
prompt_cache_path: "alpaca-cache"
prompt_cache_all: true
```
See also the [advanced section]({{%relref "advanced" %}}).
- The go-gpt2.cpp backend got renamed to go-ggml-transformers.cpp and was updated, including https://github.com/ggerganov/llama.cpp/pull/1508 which breaks compatibility with older models. This impacts RedPajama, GptNeoX, MPT (not gpt4all-mpt), Dolly, GPT2 and Starcoder based models.
- Binary releases available, various fixes, including {{< pr "341" >}}.
- /models/apply endpoint; llama.cpp backend updated, including https://github.com/ggerganov/llama.cpp/pull/1508 which breaks compatibility with older models. gpt4all is still compatible with the old format.
- gpt4all and llama backend, consolidated CUDA support ( {{< pr "310" >}} thanks to @bubthegreat and @Thireus ), preliminary support for [installing models via API]({{%relref "advanced#" %}}).
- llama.cpp-compatible models and image generation ({{< pr "272" >}}).
- llama.cpp backend and Stable diffusion CPU image generation ({{< pr "272" >}}) in master.

Now LocalAI can generate images too:
(Comparison images: mode=0 vs mode=1 (winograd/sgemm).)
- rwkv backend patch release.
- llama.cpp bindings: this update includes a breaking change in the model files ( https://github.com/ggerganov/llama.cpp/pull/1405 ) - old models should still work with the gpt4all-llama backend.
- gpt4all bindings: added support for GPTNeox (experimental), RedPajama (experimental), Starcoder (experimental), Replit (experimental), MosaicML MPT. Also, the embeddings endpoint now supports token arrays. See the langchain-chroma example! Note: this update does NOT include https://github.com/ggerganov/llama.cpp/pull/1405 , which makes models incompatible.
- bert.cpp ( {{< pr "222" >}} )
- llama.cpp backend ( {{< pr "207" >}} )
- rwkv.cpp models ( {{< pr "158" >}} ) and for the /edits endpoint
- llama.cpp backends ( {{< pr "152" >}} )