Large language models (LLMs) trained on vast amounts of text are the engines that power generative AI chatbots like OpenAI's ChatGPT and Google's Gemini, and Opera has just become the first web browser to support native integration of local LLMs.
You may have read about running an LLM locally: this means the AI model is stored on your computer, so nothing needs to be sent to the cloud. It takes a fairly capable hardware setup to work well, but it's better from a privacy perspective - no one can snoop on your prompts or use your conversations for AI training.
We've already seen Opera introduce various artificial intelligence features. This has now been extended to local LLMs, with over 150 models to choose from.
Local LLMs in Opera
Before you jump straight into running a local LLM in Opera, there are a few things to keep in mind. First, this is still experimental, so you might notice a bug or two. Second, you'll need some free storage - some LLMs take up less than 2GB, but others on the list need over 40GB.
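Given those file sizes, it's worth checking your free disk space before picking a model. A minimal sketch using Python's standard library (the path and the 40GB threshold are assumptions to match the largest models mentioned; point it at whichever drive Opera will store model files on):

```python
import shutil

def has_room_for_model(path=".", needed_gb=40):
    """Return True if the drive containing `path` has at least
    `needed_gb` gigabytes free for a local LLM download."""
    free_bytes = shutil.disk_usage(path).free
    return free_bytes >= needed_gb * 1024**3

# Smaller models on the list need under 2GB; the biggest need 40GB+.
print("Room for a small model:", has_room_for_model(".", 2))
print("Room for a large model:", has_room_for_model(".", 40))
```

This only checks capacity, not the other constraints (RAM, CPU/GPU) that also affect how well a model runs.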
Larger LLMs will give you better answers, but will also take longer to download and run. To some extent, a model's performance will depend on the hardware you're running it on, so if you're on an older computer you may have to wait a while before a response comes back (again, this is still in beta).
These local LLMs are a mixture of models published by well-known companies (Google, Meta, Intel, Microsoft) and models created by researchers and developers. They're free to install and use, partly because your own computer does the work of running the LLM, so there are no running costs for the team that developed it.
Note that some of these models are suited to specific tasks (such as coding) and may not give you the general-knowledge answers you'd expect from ChatGPT, Copilot, or Gemini. Each model comes with a description; read it before installing so you know what you're getting.
Test it yourself
At the time of writing, this feature is only available in early beta versions of Opera, ahead of a wider rollout. If you want to give it a try, you'll need to download and set up the developer version of Opera One. Once that's done, click the Aria button (the little A symbol) to open the left panel and follow the instructions to set up the built-in AI bot (you'll need to create or log in to a free Opera account).
When Aria is ready, you should see the "Select local AI model" box at the top: click this, select "Go to settings", and you'll see a list of available LLMs along with some information about each. Select any LLM to see a list of versions (and their file sizes) and a download button to install it locally.
If you wish, you can set up multiple LLMs in Opera, choosing the one you want to use for each chat via the drop-down menu at the top of the Aria window. If you don't select a local LLM, the default (cloud-based) Aria chatbot is used. You can start a new chat at any time by clicking the large + (plus) button in the upper-right corner of the chat window.
Using these local LLMs is much like using anything running in the cloud: have them generate text on any topic and in any style, ask questions about life, the universe, and everything, and get tips on whatever you like. Just remember that since these models don't have web access, they can't tell you about anything relatively new or recent.