Large Language Models (LLMs) are powerful, but they have one major limitation: they rely solely on the knowledge they were trained on. This means they lack real-time, domain-specific updates unless they are retrained, an expensive and impractical process. This is where Retrieval-Augmented Generation (RAG) comes in …
Read More
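
A minimal sketch of the idea behind the excerpt above, not taken from the article: retrieve the passages most relevant to a question and prepend them to the prompt, so the model can answer from fresh data instead of only its training set. The toy corpus, the small sentence-transformers embedder, and the final generation step are all illustrative assumptions.

```python
# Minimal RAG sketch: embed a small corpus, retrieve the closest passages,
# and build a grounded prompt for whichever LLM you prefer.
from sentence_transformers import SentenceTransformer, util

corpus = [
    "Our refund policy changed in March 2025: refunds are issued within 14 days.",
    "Ollama lets you run large language models locally on your own machine.",
    "RAG pairs a retriever with a generator to ground answers in current data.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small model, runs on CPU
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    q_emb = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, corpus_emb, top_k=k)[0]
    return [corpus[hit["corpus_id"]] for hit in hits]

question = "How long do refunds take?"
context = "\n".join(retrieve(question))

# The retrieved passages are injected into the prompt, so the model answers
# from up-to-date context rather than stale training data.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # pass this prompt to any LLM of your choice
```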

Once upon a time, coding meant sitting down, writing structured logic, and debugging for hours. Fast-forward to today, and we have Vibe Coding, a trend where people let AI generate entire chunks of code based on simple prompts. No syntax, no debugging …
Read More

Ollama is one of the easiest ways to run large language models (LLMs) locally on your own machine. It’s like Docker: you download publicly available models from Hugging Face using its command-line interface. Connect Ollama with a graphical interface and you …
Read More
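
As a quick aside to the Ollama excerpt above, here is a minimal sketch of talking to a locally running Ollama server over its HTTP API. It assumes Ollama is installed, its server is running on the default port, and you have already pulled a model; the model name is only an example.

```python
# Query a local Ollama server via its HTTP API.
# Assumes a model has already been pulled, e.g.:  ollama pull llama3.2
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",   # example model name, swap in one you have pulled
        "prompt": "Explain RAG in one sentence.",
        "stream": False,       # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text
```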

Ever since I realized that AI was shaping the future, I’ve been fascinated by its endless possibilities. I’m someone who enjoys testing large language models (LLMs) on my devices, and the open-source approach to data has always been my preference. Why? Because …
Read More

Since the launch of DeepSeek AI, every tech media outlet has been losing its mind over it. It’s been shattering records, breaking benchmarks, and becoming the go-to name in AI innovation. [Image: DeepSeek vs OpenAI benchmark | Source: Brian Roemmele] Recently, I stumbled …
Read More

Programming is one area where AI is being used extensively. Most editors let you plug in AI assistants such as ChatGPT or Microsoft’s Copilot. There are also several open-source large language models centered specifically on coding, like CodeGemma. And then we …
Read More

As artificial intelligence continues to weave its way into our daily lives, there’s a noticeable shift towards smaller, more efficient language models that can run locally on devices. SmolLM, part of a growing trend in compact language models, is a prime example, showing that …
Read More
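
To make the excerpt above concrete, here is a minimal sketch of running a compact model locally with Hugging Face transformers. The exact SmolLM checkpoint name and the generation settings are assumptions for illustration; any similarly sized model works the same way.

```python
# Run a small language model entirely on your own machine with transformers.
# The checkpoint name below is an example; pick whichever SmolLM variant you prefer.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-135M-Instruct",  # ~135M parameters, CPU-friendly
)

out = generator(
    "Why are small local language models useful?",
    max_new_tokens=60,
    do_sample=False,
)
print(out[0]["generated_text"])
```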