Hey! Our primary objective for now is to provide the open source community with cool and useful tooling - we found closed source to be much more popular because of better tooling!
Thanks! How do you earn money and keep yourselves afloat? I really like what you guys are doing, and similar orgs. I'm personally doing the same, full-time, but I'm worried about when I'll run out of personal savings.
I've been wondering this since they started, mostly out of concern that they stay afloat. Since Daniel does the work of ten, their value-to-cost ratio seems world-class at the very least.
With the studio release, it seems like they could be on the path to just bootstrapping a unicorn, or a 10x-corn or whatever that's called, which is super interesting. Anyway, his refusal to go into details reassures me; it sounds like things are fine, and they're shipping. Vaya con Dios.
Daniel is a very impressive guy, well within the realm of the "fund the people, not the idea" approach that YC seems to take. They got a few bucks from them and are probably earning from collaborations etc. The odds of them not figuring out a business model seem slim.
Companies have no idea what they are doing. They know they need it, they know they want it, engineers want it, and they don't have it in their ecosystem, so this is a perfect opportunity to come in with a professional-services play: we've got you on inference, training and running your models, all of that; just focus on your business. Pair that with Hugging Face's storage and it's a win/win.
From the README at https://github.com/unslothai/unsloth: "Unsloth uses a dual-licensing model of Apache 2.0 and AGPL-3.0. The core Unsloth package remains licensed under Apache 2.0, while certain optional components, such as the Unsloth Studio UI are licensed under AGPL-3.0."
What do you mean by custom LMStudio license? Your employer requires reviews of proprietary EULAs or do you try to get a custom licensing deal from LMStudio?
They state further down that they're working on non-Nvidia support. Looking forward to it, since I'm pretty heavily invested in suffering on AMD (ROCm sucks, but everything else about AMD is worth it to me).
Thank you for the follow up! Big fan of your models here, thanks for everything you are doing!
Works fine on macOS now (chat only).
On Ubuntu 24.04 with two GPUs (3090 + 3070), it appears that llama.cpp sometimes uses the CPU and not the GPU, judging by the tok/s and CPU load for identical models run via Unsloth Studio vs. plain llama.cpp (bleeding edge).
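One thing worth checking when that happens is whether llama.cpp was told to offload layers at all. A hedged sketch, using standard llama.cpp server flags (the model path and split ratio are placeholders, not a recommendation):

```shell
NGL=99            # offload up to all layers to the GPUs
if command -v llama-server >/dev/null 2>&1; then
  # For a 3090 + 3070 pair, --tensor-split biases more layers toward the
  # bigger card. Watch the startup log for "offloaded N/N layers to GPU";
  # if N is 0, the model is running on CPU.
  llama-server -m model.gguf -ngl "$NGL" --tensor-split 3,1
fi
```

If Studio launches llama.cpp without an `-ngl`-equivalent setting, that could explain the CPU-only runs.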
LM Studio user here. Unsloth looks great and I wanted to check it out, but there is no app file to download and install? Sorry, I'm not familiar with the command line.
uv helps you out here, though. Use a pyproject.toml and `uv sync`; everything goes into the venv only, nothing is spread across the whole system.
The pyproject.toml can even handle the build environment for you, so you no longer need a setup.sh that installs 10 tools in a specific order with specific flags to produce a working environment. A single `uv sync` and the job is done.
Plus the result is reproducible: if `uv sync` works this time, it will also work next time.
Highly recommended if you are still on pip.
Note: as an example, I used this to install unsloth with a ROCm setup that depends on unreleased git-version dependencies and graphics-card-specific build flags; all of it is handled by one `uv sync` command. Done any other way, this would require a big pile of shell scripts. https://github.com/unslothai/unsloth/issues/4280#issuecommen...
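For anyone curious what that looks like in practice, here's a minimal sketch of such a pyproject.toml. The package name, URL, and rev below are placeholders for illustration, not the actual unsloth setup:

```toml
[project]
name = "finetune-env"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "unsloth",
    "some-kernel-lib",  # placeholder for a GPU-specific dependency
]

# Pin a dependency to an unreleased git revision; `uv sync` resolves and
# builds it along with everything else, inside the project venv.
[tool.uv.sources]
some-kernel-lib = { git = "https://github.com/example/some-kernel-lib", rev = "abc1234" }
```

With this in place, `uv sync` is the entire install step.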
I recommend installing uv first, then you can install any Python code you want inside a virtual environment to keep it isolated from the rest of the system.
Yep, `uv pip install unsloth` works as well - we probably should have just made that the default. In fact, Unsloth dynamically makes its own venv using uv if you have it installed.
I think the website should probably mention those installation presets in unsloth's pyproject.toml, though. The website instructs you to install dependencies separately, but it turns out there are dedicated presets in the project that install specific ROCm/CUDA/XPU versions.
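If I'm reading the repo right, those presets are ordinary optional-dependency extras, something along these lines (the extra names and pins here are illustrative only; check unsloth's actual pyproject.toml for the real ones):

```toml
[project.optional-dependencies]
cuda = ["torch>=2.4"]             # illustrative pins, not the real preset
rocm = ["pytorch-triton-rocm"]
```

Then something like `uv sync --extra rocm` (or `uv pip install 'unsloth[rocm]'`) selects the matching preset.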
Ah yes, I came to say something similar: Python dependencies are an absolute nightmare. Even with uv it feels like there's always a battle to make other people's Python apps install.
Update: it looks like it doesn't work with the current Python version; you might have to downgrade to Python 3.13 (though even then I still get `error: unexpected argument '--torch-backend' found`).
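For what it's worth, `--torch-backend` is a relatively recent uv flag, so that error usually just means the installed uv predates it. A hedged sketch of the check-and-retry (the `self update` path assumes uv was installed via the standalone installer; otherwise update it however it was installed):

```shell
# If `--torch-backend` is rejected, the installed uv is likely too old.
if command -v uv >/dev/null 2>&1; then
  uv --version                   # check the current version first
  uv self update || true         # only works for standalone-installer installs
  uv pip install unsloth --torch-backend=auto || true
fi
```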
You would be surprised - we're the 4th largest independent distributor of LLMs in the world, and nearly every Fortune 500 company has utilized either our RL fine-tuning package or our quants and models. We, for example, collaborate directly with large labs to release models with bug fixes.
You would be surprised! Nearly every Fortune 500 company has utilized either our RL fine-tuning package or our quants and models. The UI was primarily a culmination of the pain points folks had when doing either training or inference!
We're complementary to LM Studio - they have a great tool as well!
What does "normal AMD support" mean here? I was completely unable to get it working on my Ryzen AI 9700 XT. I had to munge the versions in the requirements to get libraries compatible with a recent enough ROCm, and it didn't go well at all. My last attempt was a couple of weeks before Studio was announced.
Actually the opposite haha- more than 50% of our audience comes from large organizations eg Meta, NASA, the UN, Walmart, Spotify, AWS, Google, and the list goes on!
However, since I already have pi working with llama.cpp server from a docker container, I did a quick experiment to compare three code bases:
https://gist.github.com/ontouchstart/7483c12efa3c3d3a49e38c2...
https://gist.github.com/ontouchstart/217fe2b8103a5c0bfaee1e9...
Very interesting.
Will do it again next week if I can get unsloth studio working.
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv unsloth_studio --python 3.13
source unsloth_studio/bin/activate
uv pip install unsloth==2026.3.7 --torch-backend=auto
unsloth studio setup
unsloth studio -H 0.0.0.0 -p 8888
https://gist.github.com/ontouchstart/532312fcba59aec3ce7f6aa...
Here is the error message on my machine:
https://gist.github.com/ontouchstart/86ca3cbd8b6b61fa0aeec75...
It seems we might need more instructions on how to set up Python (via uv) on vanilla macOS.
```
../scipy/meson.build:274:9: ERROR: Dependency lookup for OpenBLAS with method 'pkgconfig' failed: Pkg-config for machine host machine not found. Giving up.
```
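That error is SciPy being built from source and failing to find OpenBLAS via pkg-config. On macOS, a plausible fix, assuming Homebrew (OpenBLAS is keg-only, so pkg-config can't see it without PKG_CONFIG_PATH being set by hand):

```shell
if command -v brew >/dev/null 2>&1; then
  brew install pkg-config openblas || true
  # Point pkg-config at the keg-only OpenBLAS before retrying the install.
  export PKG_CONFIG_PATH="$(brew --prefix openblas)/lib/pkgconfig:${PKG_CONFIG_PATH:-}"
fi
```

Alternatively, using a Python version that has prebuilt SciPy wheels sidesteps the source build entirely.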
Too much work.
We have much, much more in the pipeline!
https://www.ycombinator.com/companies/unsloth-ai
Is there an alternative, tutorial, or project you'd recommend that would help me do supervised fine tuning (SFT) with the metal stack / macOS?
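Not the poster, but since Unsloth's training path is CUDA/ROCm-centric, one commonly used option on Apple silicon is mlx-lm's built-in LoRA trainer. A hedged sketch (the model id, data path, and iteration count are placeholders; the trainer expects train/valid JSONL files in the data directory):

```shell
if command -v uv >/dev/null 2>&1; then
  uv pip install mlx-lm || true
fi
# LoRA SFT on the Metal stack; expects ./data/train.jsonl and ./data/valid.jsonl
python3 -m mlx_lm.lora \
  --model mlx-community/Llama-3.2-3B-Instruct-4bit \
  --train --data ./data --iters 600 || true
```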
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv unsloth_studio --python 3.13
source unsloth_studio/bin/activate
uv pip install unsloth --torch-backend=auto
unsloth studio setup
unsloth studio -H 0.0.0.0 -p 8888
This needs to go on Homebrew or be a zip file with an app for manual download.
We come from Python land mainly, so packaging and distribution are all very new to us - Homebrew will definitely be next!
https://pipx.pypa.io/stable/installation/
uv init
uv add unsloth
uv run main.py  # or whatever
Also, I've never seen any Unsloth-related software in production to this day. It strongly feels like a non-essential tool for hobbyist LLM wizards.
Is it like, for AI hobbyists? I.e. I have a 4090 at home and want to fine-tune models?
Is it a competitor to LMStudio?
We're complimentary to LM Studio - they have a great tool as well!
Happy to see Unsloth making it even easier for people like me to get going with fine-tuning. Not that I'm unable to; I'm just lazy.
Fine-tuning with a UI is definitely targeted toward hobbyists. Sadly, I'll have to wait for AMD ROCm support.