As you may have seen, OpenAI has just released two new AI models – GPT-OSS-120B and GPT-OSS-20B – which are the first open-weight models from the company since GPT-2.
These two models – one more compact, the other much larger – are defined by the fact that you can run them locally. They work on your desktop PC or laptop – right on the device, with no need to go online or tap the power of the cloud, provided your hardware is powerful enough.
So you can download either the 20B version – or, if your PC is a powerful machine, the 120B spin – and play around with it on your computer, check how it works (in text-to-text mode) and how the model thinks (its whole reasoning process is broken down into steps). And in fact, you can fine-tune and build on these open models, although safety guardrails and censorship measures will, of course, remain in place.
But what kind of hardware do you need to run these AI models? In this article, I examine the PC spec requirements for both GPT-OSS-20B – the more compact model that packs 21 billion parameters – and GPT-OSS-120B, which offers 117 billion parameters. The latter is designed for data center use, but it will run on a high-end PC, whereas GPT-OSS-20B is the model designed specifically for consumer devices.
In fact, when announcing these new AI models, Sam Altman referred to the 20B version working not only on run-of-the-mill laptops, but also on smartphones – but suffice it to say that's an ambitious claim, which I'll come back to later.
These models can be downloaded from Hugging Face (here’s GPT-OSS-20B, and here’s GPT-OSS-120B) under the Apache 2.0 license, or for the merely curious, there’s an online demo you can check out (no download needed).
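If you’d rather grab the raw weights from Hugging Face yourself than use the tools covered later in this article, a minimal sketch using the Hugging Face command-line tool looks like this (assuming the repo IDs match the model pages linked above):

pip install -U "huggingface_hub[cli]"
huggingface-cli download openai/gpt-oss-20b

Swap in openai/gpt-oss-120b for the larger model – but be warned, that’s a far heftier download.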
The smaller GPT-OSS-20B model
Minimum RAM required: 16 GB
The official documentation from OpenAI simply specifies the amount of RAM needed for these AI models, which in the case of this more compact GPT-OSS-20B effort is 16 GB.
This means you can run GPT-OSS-20B on any laptop or PC that has 16 GB of system memory (or 16 GB of video RAM, or a combination of both). However, it’s very much a case of the more, the merrier – or faster, rather. The model may chug along with the bare minimum of 16 GB, so ideally you’ll want a little more than that on tap.
As for CPUs, AMD recommends using a Ryzen AI 300 series processor paired with 32 GB of memory (with half of it, 16 GB, set as variable graphics memory). On the GPU front, AMD recommends any RX 7000 or 9000 model with 16 GB of memory – though these aren’t hard-and-fast requirements as such.
Really, the key factor is simply having enough memory – the aforementioned 16 GB allocation – and preferably having all of it on your GPU. That allows all the work to take place on the graphics card, without being slowed down by having to offload some of it to the PC’s system memory. Fortunately, the so-called mixture-of-experts (MoE) design OpenAI uses here – whereby only a subset of the model’s parameters is active at any one time – helps to minimize that kind of performance hit.
Anecdotally, to pick an example plucked from Reddit, GPT-OSS-20B runs fine on a MacBook Pro M3 with 18 GB of RAM.
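Not sure how much memory you’ve got to play with? It’s quick to check from a terminal – a rough sketch below (these commands assume Linux, macOS, and an NVIDIA graphics card, respectively):

free -h    # system RAM on Linux
sysctl hw.memsize    # system RAM on macOS (reported in bytes)
nvidia-smi --query-gpu=memory.total --format=csv    # VRAM on an NVIDIA GPU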

The larger GPT-OSS-120B model
Minimum RAM required: 80 GB
It’s much the same deal with the beefier GPT-OSS-120B model, except, as you might guess, you need a lot more memory. Officially, that means 80 GB, though remember, you don’t have to have all of that on your graphics card. That said, this large AI model is really designed for data center use, on a GPU with 80 GB of memory on board.
However, the RAM allocation can be split. So you could run GPT-OSS-120B on a computer with 64 GB of system memory plus a 24 GB graphics card (an NVIDIA RTX 3090 Ti, for example, as per this Redditor), which makes for 88 GB of RAM overall.
AMD’s recommendation in this case, CPU-wise, is for its top-of-the-range Ryzen AI Max+ 395 processor paired with 128 GB of system RAM (with 96 GB of that assigned as variable graphics memory).
In other words, you’re looking at a seriously high-end workstation laptop or desktop (perhaps with multiple GPUs) for GPT-OSS-120B. That said, you may be able to get away with a little less than the stipulated 80 GB of memory, going by some anecdotal reports – though I wouldn’t bank on it.
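Once you’ve got either model running via Ollama (installation is covered in the next section), it’s worth sanity-checking how the memory was actually divided up, since Ollama decides the CPU/GPU split automatically. A quick sketch:

ollama ps

The output should indicate how much of the model landed on the GPU versus the CPU – if a big chunk spilled over to system memory, expect things to run noticeably slower.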

How to run these models on your PC
Assuming you meet the system requirements outlined above, you can run either of these new GPT-OSS releases using Ollama, which is OpenAI’s platform of choice for these models.
Go here to grab Ollama for your PC (Windows, Mac, or Linux) – click the button to download the executable, and when it’s finished downloading, double-click the executable file to run it, then click Install.
Then run the following two commands in Ollama to fetch and then run the desired model. In the example below, we’re running GPT-OSS-20B, but if you want the larger model, just replace 20b with 120b.
ollama pull gpt-oss:20b
ollama run gpt-oss:20b
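Once the model is up and running, you can also talk to it programmatically: Ollama exposes a local REST API on port 11434 by default. A quick sketch of firing off a one-shot prompt from another terminal:

curl http://localhost:11434/api/generate -d '{"model": "gpt-oss:20b", "prompt": "Why is the sky blue?", "stream": false}'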
If you’d prefer an alternative to Ollama, you can use LM Studio instead, via the following command. Again, you can change 20b to 120b, or vice versa, as needed:
lms get openai/gpt-oss-20b
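Note that after LM Studio finishes downloading, you’ll still need to load the model before chatting with it. A brief sketch using the same lms command-line tool (assuming the model identifier matches the one used above):

lms load openai/gpt-oss-20b
lms server start    # optional: exposes an OpenAI-compatible API on your machine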
Windows 11 (or 10) users can also exercise the option of Windows AI Foundry (hat tip to The Verge).
In this case, install Foundry Local – one caveat here is that it’s still in preview – and check this guide for full instructions on what to do. Also note that right now you need an NVIDIA graphics card with 16GB of VRAM on board (although other GPUs, such as AMD Radeon models, will eventually be supported – remember, this is still a preview release).
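Once Foundry Local is installed, running the model follows much the same pattern as the other tools. A sketch below – though note that the exact model name here is an assumption on my part, so defer to the guide linked above:

foundry model run gpt-oss-20b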
In addition, macOS support is coming “soon”, we’re told.

What about smartphones?
As mentioned at the outset, while Sam Altman said the smaller AI model runs on a phone, that claim is pushing it.
True, Qualcomm issued a press release (as spotted by Android Authority) about GPT-OSS-20B running on devices with a Snapdragon chip, but this is more about laptops – Copilot+ PCs with Snapdragon X silicon – rather than smartphone CPUs.
Running GPT-OSS-20B isn’t a realistic proposition for today’s phones, even if it might be technically possible (provided your handset has 16 GB+ of RAM). Still, I doubt the results would be impressive.
However, we’re probably not far away from these kinds of models running properly on mobiles, and that’s surely on the cards for the near future.