<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Posts on Ravi Rai</title>
    <link>/posts/</link>
    <description>Recent content in Posts on Ravi Rai</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Wed, 11 Feb 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="/posts/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Complete Guide: Setting Up Ollama on Intel GPU with Intel Graphics Package Manager</title>
      <link>/posts/complete-guide-setting-up-ollama-on-intel-gpu/</link>
      <pubDate>Wed, 11 Feb 2026 00:00:00 +0000</pubDate>
      <guid>/posts/complete-guide-setting-up-ollama-on-intel-gpu/</guid>
      <description>&lt;p&gt;I remember using ChatGPT for the first time to write a reply when I received appreciation from the leadership team for my work at my previous company. Nowadays, AI is part of day-to-day life and has made mine easier. I wondered whether I could run an LLM locally on my laptop, so I installed the Ollama desktop app for Windows. With just 16 GB of RAM, my laptop handled small models fine for basic email-writing tasks, but running a 1B-parameter model alongside my regular apps like Teams and Chrome, it frequently became unresponsive. On another laptop with a dedicated graphics card, I was able to run models of up to 8B parameters smoothly.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
