Last Week's Bytes #1
This week's collection of findings, including Llama 3.2 Vision on Ollama, Meta LayerSkip, and Vinxi.
Shaking things up a bit. I came across a number of interesting pieces of content last week relating to AI, web dev, and more. This post shares my high-level thoughts on each of them.
Llama 3.2 Vision on Ollama
Ollama is my tool of choice for running LLMs locally. Specifically, I use Ollama as my API for interacting with various LLMs whenever I'm working on side projects for fun. It's so easy to use, and it honestly just works™.
This week Ollama released support for the Llama 3.2 Vision models, so I went ahead and tested this out myself. This basically allows anyone to pull these models down locally, give the model an image, and then start asking it questions about the image.
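If you want to drive it from code rather than the CLI, here's a minimal sketch of how I'd send an image to the model through Ollama's local REST API from Node. The `llama3.2-vision` tag and the `photo.jpg` path are assumptions on my part; check the Ollama model library for the exact tag.

```ts
// Minimal sketch: ask a local Llama 3.2 Vision model about an image via
// Ollama's REST API. Assumes Ollama is running on its default port and the
// model has already been pulled (e.g. `ollama pull llama3.2-vision`).
// Run as an ES module (Node 18+ for the built-in fetch).
import { readFileSync } from "node:fs";

const imageBase64 = readFileSync("./photo.jpg").toString("base64"); // hypothetical image path

const res = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.2-vision",
    stream: false,
    messages: [
      // Vision models accept base64-encoded images alongside the prompt.
      { role: "user", content: "What is in this image?", images: [imageBase64] },
    ],
  }),
});

const data = await res.json();
console.log(data.message.content);
```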
I asked it: "What is in this image?"
I ran this test on my MacBook Pro (Apple M1 Max), and it took about a minute to read the image and return the initial results. All subsequent questions after that were answered quickly, as you'd expect.
As part of this research, I’m realizing more and more that I need a dedicated server with Ollama on it to speed things up.
Meta LayerSkip
I came across this one on the machine learning subreddit last week. Meta AI released a new way to speed up inference in LLMs. They're calling it LayerSkip, and you can see the GitHub repo here.
LayerSkip achieves significant speedups across various tasks, including summarization, coding, and semantic parsing, showcasing its potential for improving the efficiency of LLMs.
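From my skim of the repo, the idea is to train the model with layer dropout and an early-exit loss so that its first layers become a decent predictor on their own; at inference, those early layers draft tokens cheaply and the full model verifies them in one pass (self-speculative decoding, with no separate draft model). Here's a rough conceptual sketch of that decode loop; all of the names and the toy model are mine, not Meta's API:

```ts
// Conceptual sketch of LayerSkip-style self-speculative decoding.
// The early layers of ONE model draft tokens cheaply; the full model then
// verifies the draft and keeps the longest prefix it agrees with.
type Token = number;

interface Model {
  // Greedy next token using only the first `exitLayer` layers (early exit).
  draftNext(context: Token[], exitLayer: number): Token;
  // Full model's greedy token at each drafted position, scored in one pass.
  verify(context: Token[], draft: Token[]): Token[];
}

function generate(model: Model, prompt: Token[], maxTokens: number, draftLen = 4, exitLayer = 8): Token[] {
  const out = [...prompt];
  while (out.length - prompt.length < maxTokens) {
    // 1. Draft a few tokens with the cheap early-exit head.
    const draft: Token[] = [];
    for (let i = 0; i < draftLen; i++) {
      draft.push(model.draftNext([...out, ...draft], exitLayer));
    }
    // 2. Verify all drafted positions with the full model.
    const full = model.verify(out, draft);
    // 3. Accept the longest agreeing prefix, plus the full model's own
    //    token at the first disagreement.
    let accepted = 0;
    while (accepted < draftLen && draft[accepted] === full[accepted]) accepted++;
    out.push(...draft.slice(0, accepted));
    if (accepted < draftLen) out.push(full[accepted]);
  }
  return out;
}

// Toy stand-in: the "full model" counts upward; the draft head is right
// except when the next token would be a multiple of 5.
const toy: Model = {
  draftNext: (ctx) => ((ctx[ctx.length - 1] + 1) % 5 === 0 ? 0 : ctx[ctx.length - 1] + 1),
  verify: (ctx, draft) => draft.map((_, i) => (i === 0 ? ctx[ctx.length - 1] : draft[i - 1]) + 1),
};
console.log(generate(toy, [1], 10)); // counts 1, 2, 3, ... despite the flaky draft head
```

When the draft head is right most of the time, most tokens cost only a few layers of compute, which is where the speedup comes from.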
To be honest, it was a bit hard for me to follow all the details when I was reading through the repo. But generally it's pretty cool to see that Meta continues to find ways to improve LLMs and consistently shares its work with the open source community. It gives folks like me a chance to play with these things and learn in a way we otherwise couldn't.
Vinxi
Last but not least, I was introduced to this thing called Vinxi. Vinxi advertises itself as "The Full Stack JavaScript SDK". So what exactly is it?
It looks like Vinxi gives us a way to create our own framework: it's the meta-framework you use to build a framework. This is incredible! I got really excited when I read these two articles.
Both give nice tutorials on how you can use Vinxi to create your own framework.
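To make that concrete, here's roughly what the smallest possible Vinxi app looks like, based on my reading of the docs and those tutorials (the file names and handler contents are illustrative, not taken from the articles). Your "framework" is just a composition of routers declared in an app.config.ts:

```ts
// app.config.ts — a Vinxi app is a composition of routers.
import { createApp } from "vinxi";

export default createApp({
  routers: [
    // Serve static assets from ./public.
    { name: "public", type: "static", dir: "./public" },
    // An HTTP router whose handler is a plain file — the seed of "your framework".
    { name: "api", type: "http", base: "/api", handler: "./app/api.ts" },
  ],
});
```

```ts
// app/api.ts — Vinxi handlers are h3 event handlers under the hood.
import { eventHandler } from "vinxi/http";

export default eventHandler(() => ({ hello: "world" }));
```

Run `vinxi dev` and each router gets built and served on its own; presumably a React Server Components framework would grow out of this by adding routers for the server and client bundles.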
At the beginning of this year I was extremely interested in understanding React Server Components down to their core. I spent nights and weekends watching YouTube videos on how to build my own application using React Server Components from scratch, meaning I didn't want to use Next.js or any other framework; I wanted to implement React Server Components myself at the server and build levels. After following a couple of tutorials and piecing things together, I barely got something working, but it wasn't straightforward at all and was going to require writing a lot more code. So I tabled the idea.
But now it seems that with Vinxi I could, and should, try building my own React Server Components framework on top of it.
Why create your own? I'm not a fan of magic in code; some folks are. When I use Next.js there are a lot of things that are magical. Some of that magic is good, and some I believe to be pretty confusing. I'd rather have less magic and understand every piece, so that when something bad happens I can fix it.
I look forward to taking Vinxi for a spin soon.
Thank you for reading this post! If you enjoyed it, feel free to follow me below and visit my website for more content.