TechBytes on Linux

Thursday, May 22, 2025

Google I/O 2025 summary

Google just dropped their biggest AI updates ever during Google I/O 2025.

Here are 13 new AI updates you can't miss:

  1. Gemini Live. You can now turn on your camera, point at anything, and talk to Gemini about it in real time
  2. Imagen. Google's best image model yet
  3. Veo 3. The first video model with native sound generation
  4. Deep Research
  5. Project Astra. A JARVIS-like research prototype exploring the capabilities of a universal AI assistant
  6. Google Flow. AI filmmaking tool for creators
  7. Agent Mode. A new feature in the Gemini app that lets you state a goal, and Gemini will handle the steps to achieve it
  8. Google Jules. Jules is an AI-powered coding assistant that can read your code, write tests, fix bugs, and update dependencies
  9. AI Mode in Search. AI Mode transforms Google Search into a conversational assistant
  10. Real-time speech translation in Google Meet
  11. Google Beam. An AI-first video communication platform that turns 2D video streams into realistic 3D experiences
  12. Gemma 3n. A new open-source AI model optimized for mobile devices
  13. Try-On. Google's Virtual Try-On feature lets you upload a photo of yourself to see how clothes would look on you

What are your thoughts on this?

Free MCP (Model Context Protocol) course

Worth investing time in learning the Model Context Protocol through the free course provided by Hugging Face:

 https://huggingface.co/learn/mcp-course/unit0/introduction

Wednesday, May 14, 2025

MCP vs RAG (Model Context Protocol vs Retrieval Augmented Generation)



RAG (Retrieval-Augmented Generation) focuses on enhancing AI responses by retrieving external data, while MCP (Model Context Protocol) standardizes how AI interacts with various data sources and tools.

RAG vs MCP at a Glance
Scope: RAG is a specific method focused on improving the accuracy of LLM outputs by grounding them in external knowledge, while MCP is a broader protocol that standardizes interactions between AI and various data systems.


Data Retrieval: RAG retrieves external data each time a query is made, whereas MCP allows LLMs to access contextual memory and external data more efficiently, reducing the need for repeated data retrieval.


Integration: RAG requires specific setups for each data source, while MCP provides a universal framework that simplifies the integration of multiple data sources and tools into AI applications.

Conclusion
Both RAG and MCP play significant roles in enhancing AI capabilities, but they serve different purposes. RAG is ideal for applications needing real-time data retrieval to improve response accuracy, while MCP offers a standardized approach for integrating various tools and data sources, making it easier to build complex AI systems. Understanding these differences is crucial for developers and organizations looking to leverage AI effectively in their applications.

PostgreSQL list of commands

PostgreSQL, or Postgres, is an object-relational database management system that utilizes the SQL language. PSQL is a powerful interactive terminal for working with the PostgreSQL database. It enables users to execute queries efficiently and manage databases effectively.

Here, we highlight some of the most frequently used PSQL commands, detailing their functionalities to enhance your PostgreSQL experience.
Top PSQL Commands in PostgreSQL

Here are the top 22 PSQL commands that are frequently used when querying a PostgreSQL database:
No. Command Description
1 psql -d database -U user -W Connect to a database under a specific user
2 psql -h host -d database -U user -W Connect to a database that resides on another host
3 psql -U user -h host "dbname=db sslmode=require" Use SSL mode for the connection
4 \c dbname Switch connection to a new database
5 \l List available databases
6 \dt List available tables
7 \d table_name Describe a table (columns, types, modifiers, etc.)
8 \dn List all schemas of the currently connected database
9 \df List available functions in the current database
10 \dv List available views in the current database
11 \du List all users and their assigned roles
12 SELECT version(); Retrieve the current version of PostgreSQL server
13 \g Execute the last command again
14 \s Display command history
15 \s filename Save the command history to a file
16 \i filename Execute psql commands from a file
17 \? List all available psql commands
18 \h Get help
19 \e Edit command in your own editor
20 \a Switch from aligned to non-aligned column output
21 \H Switch the output to HTML format
22 \q Exit psql shell

Additional Information:
The -d option in psql commands specifies the database name.
The -U option specifies the database user.
The -h option indicates the host on which the database server resides.
Running \h ALTER TABLE gives detailed help on the ALTER TABLE statement.
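
Putting a few of these together, here is a minimal sketch of a typical session (mydb, appuser, and the customers table are placeholders, not names from this post):

    $ psql -d mydb -U appuser -W
    mydb=> \l
    mydb=> \dt
    mydb=> \d customers
    mydb=> SELECT version();
    mydb=> \q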

Friday, May 9, 2025

How to deploy Ollama & Open WebUI on your laptop

How to deploy Ollama
1. Installation:
  • Download Ollama: Get the Ollama package from the GitHub repository. 
  • Install Dependencies: Ensure you have any required dependencies, including libraries for your specific model. 
  • Verify Installation: Use ollama --version to confirm Ollama is installed correctly. 
2. Model Deployment and Usage:
  • Pull the Model: Use the ollama pull <model_name> command to download the desired model. 
  • Run the Model: Use ollama run <model_name> to initiate the model's execution. 
  • Interact with the Model: Ollama exposes an HTTP API at http://localhost:11434/api/generate for sending prompts programmatically (see the example after this list). 
  • Optional: Web UI: Explore Open WebUI for a user-friendly interface to manage and interact with models. 
  • Optional: Custom Applications: Build custom applications using libraries like FastAPI and Gradio to integrate Ollama models into your workflows. 
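
As a quick check of the API endpoint mentioned above, a minimal sketch using curl (the model name llama3 is a placeholder for whichever model you pulled):

    # send a single, non-streaming generation request to the local Ollama server
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3",
      "prompt": "Explain what a chroot jail is in one sentence.",
      "stream": false
    }'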

How to deploy open-webui

Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.

How to Install 🚀

Installation via Python pip 🐍

Open WebUI can be installed using pip, the Python package installer. Before proceeding, ensure you're using Python 3.11 to avoid compatibility issues.
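
One simple way to pin the interpreter version is a dedicated virtual environment; a minimal sketch, assuming python3.11 is already installed (the path ~/open-webui-venv is just a placeholder):

    # create and activate a Python 3.11 virtual environment for Open WebUI
    python3.11 -m venv ~/open-webui-venv
    source ~/open-webui-venv/bin/activate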

  1. Install Open WebUI: Open your terminal and run the following command to install Open WebUI:

    pip install open-webui
  2. Running Open WebUI: After installation, you can start Open WebUI by executing:

    open-webui serve

This will start the Open WebUI server, which you can access at http://localhost:8080



To upgrade the Open WebUI components:

pip install open-webui --upgrade

Sunday, April 20, 2025

Run Windows apps through Wine on macOS Apple Silicon

 https://macappstore.org/whiskey/ 


Install the App

  1. Press Command+Space and type Terminal and press enter/return key.
  2. Copy and paste the following command in Terminal app:
    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" 
    and press enter/return key. Wait for the command to finish. If you are prompted to enter a password, please type your Mac user's login password and press ENTER. Mind you, as you type your password, it won't be visible on your Terminal (for security reasons), but rest assured it will work.
  3. Now, copy/paste and run this command to make brew command available inside the Terminal: echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
  4. Copy and paste the following command:
    brew install --cask whiskey

Done! You can now use Whiskey.
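
To confirm the installation, a quick sanity check in Terminal (output will vary by machine):

    # verify Homebrew is on the PATH and list installed casks (Whiskey should appear)
    brew --version
    brew list --cask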

Refer to this video for the steps to use the app on macOS Apple Silicon laptops:

https://www.youtube.com/watch?v=8KeES5llh9I&t=82s 


pivot_root vs chroot vs switch_root

The chroot command modifies the root directory for a process, limiting its access to the rest of the filesystem. This is useful for security, containerization, or testing purposes. The process running under chroot has no knowledge of anything outside its jail, making it appear as if it is running on the root filesystem.


In Linux, pivot_root, chroot, and switch_root are commands used to change the root filesystem of a process. Each has its specific use cases and functionalities.

pivot_root

The pivot_root command is used to change the root filesystem of the current process and its children. It swaps the current root filesystem with a new one, making the old root accessible at a specified location. This command is typically used during the boot process when the system transitions from an initial ramdisk (initrd) to the real root filesystem.

Example:

mount /dev/hda1 /new-root
cd /new-root
pivot_root . old-root
exec chroot . sh <dev/console >dev/console 2>&1
umount /old-root

In this example, the root filesystem is changed to /new-root, and the old root is accessible at /old-root.

chroot

The chroot command changes the root directory for the current process and its children to a specified directory. Unlike pivot_root, chroot does not swap the root filesystem but simply changes the reference point for the current process. This command is often used to create isolated environments, such as chroot jails.


Example:
chroot /new-root /bin/bash
In this example, the root directory for the current shell is changed to /new-root, and /bin/bash is executed within this new root.
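
For a slightly fuller illustration, a minimal sketch of building a throwaway jail and entering it (assuming a statically linked busybox binary is available; /srv/jail is just a placeholder path):

    # build a minimal jail containing only a static busybox shell
    mkdir -p /srv/jail/bin
    cp /bin/busybox /srv/jail/bin/sh   # busybox behaves as a shell when invoked as "sh"
    sudo chroot /srv/jail /bin/sh      # the shell now sees /srv/jail as /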

switch_root

The switch_root command is used to switch from an initial ramdisk (initramfs) to the real root filesystem. It is similar to pivot_root but is specifically designed for use with initramfs. switch_root performs additional cleanup tasks, such as moving common mount points (/dev, /proc, /sys, etc.) into the new root and attempting to delete everything in the old root.

Example:
switch_root /new-root /sbin/init

In this example, the root filesystem is switched to /new-root, and /sbin/init is executed as the new init process.

Key Differences

pivot_root: Swaps the current root filesystem with a new one, making the old root accessible. Used during the boot process with initrd.

chroot: Changes the root directory for the current process without swapping the root filesystem. Used for creating isolated environments.

switch_root: Switches from initramfs to the real root filesystem, performing additional cleanup tasks. Used during the boot process with initramfs.

Use Cases

pivot_root: Used when you need to preserve the original root for some purpose, such as during the boot process with initrd.


chroot: Used to create isolated environments, such as chroot jails, for security or testing purposes.


switch_root: Used to switch from initramfs to the real root filesystem during the boot process, performing additional cleanup tasks.

By understanding the differences and use cases of these commands, you can choose the appropriate one for your specific needs in managing the root filesystem of a process.

Friday, February 14, 2025

Vim Copy & Paste Terminology

The keyboard shortcuts to copy, cut, and paste can be boiled down into three characters that utilize Vim-specific terminology.

Understanding these terms will help you recall the correct keyboard shortcut.

Y stands for “yank” in Vim, which is conceptually similar to copying.

D stands for “delete” in Vim, which is conceptually similar to cutting.

P stands for “put” in Vim, which is conceptually similar to pasting.

I deliberately use the phrase “conceptually similar to” because these actions are not one and the same. If you want to dive deeper into this explanation, scroll down to the section below titled “What Happens Under the Hood?”

Copying, Cutting, and Pasting in Vim/Vi - The Basics

1. Press esc to return to normal mode. Any character typed in normal mode will be interpreted as a vim command.

2. Navigate your cursor to the beginning of where you want to copy or cut.

3. To enter visual mode, you have 3 options. We suggest using visual mode because the selected characters are highlighted, and it's clearer to see what's happening. However, we have the keyboard shortcuts for normal mode (which achieve the exact same effect) in the section below.

4. Press v (lowercase) to enter visual mode. This will start selecting from where your cursor is.

5. Press V (uppercase) to enter visual line mode. This will select the entire line.

6. Press CTRL+V to enter visual block mode.

7. Move your cursor to the end of where you want to copy or cut.

8. Press y to copy. Press d to cut.

9. Move the cursor to where you want to paste your selection.

10. Press P (uppercase) to paste before your cursor. Press p (lowercase) to paste after your cursor.



Vim Keyboard Shortcuts

Using arrow keys (or if you are an expert Vim user - h, j, k, and l) to move around in your vim file can take a long time.

Here are vim keyboard shortcuts for copying and cutting if you want to be even more efficient than the basic steps outlined above.

Copying (Yanking)

yy: Copy the current line in vi

3yy: To yank multiple lines in vim, type in the number of lines followed by yy. This command will copy (yank) 3 lines starting from your cursor position.

y$: Copy everything from the cursor to the end of the line

y^: Copy everything from the start of the line to the cursor.

yiw: Copy the current word.

Cutting (Deleting)

dd: Cut the current line

3dd: Cut 3 lines, starting from the cursor

d$: Cut everything from the cursor to the end of the line

Putting (Pasting)

P (uppercase): Paste before your cursor

p (lowercase): Paste after your cursor

What Happens Under the Hood?

Vim terms are slightly different from their conceptual counterparts that we mentioned above because these actions by default do not interact with the OS clipboard. For example, you can't paste yanked text from Vim into a different application with CMD + V.


Yank, delete, and put interact with Vim's notion of "registers", which are basically Vim-specific clipboards. Each register is named with a character, which you can use to interact with that register. For example, you might yank line 50 of the current file to register "a" and yank line 14 of the same file to register "b", because you intend to paste both line 50 and line 14.
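
To put that register example into concrete keystrokes (the line numbers are just illustrative):

:50 followed by "ayy: yank line 50 into register a

:14 followed by "byy: yank line 14 into register b

"ap: put the contents of register a after the cursor

"bp: put the contents of register b after the cursor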

To learn more about vim registers, check out this Stack Overflow page.

Conclusion

This should be everything you need to get started copying, cutting, and pasting in Vi. If you didn't find what you were looking for, it may be worth checking out the official vim docs.

Tuesday, January 28, 2025

DeepSeek R1: A Technical Deep Dive into the Next-Gen AI Search and Conversational Tool

 Artificial intelligence has become a cornerstone of modern technology, with tools like DeepSeek R1 and ChatGPT leading the charge in transforming how we interact with machines. While both are powered by advanced AI, they cater to different use cases and employ distinct technical architectures. In this article, we’ll explore the technical underpinnings of DeepSeek R1, compare it with ChatGPT, and highlight their unique capabilities.

---

What is DeepSeek R1?

DeepSeek R1 is an AI-driven search and conversational platform designed to deliver real-time, context-aware, and highly personalized results. Unlike traditional search engines, which rely on keyword matching and static datasets, DeepSeek R1 leverages cutting-edge natural language processing (NLP), machine learning (ML), and real-time data integration to provide dynamic and accurate responses.

The "R1" in its name stands for Real-time, Relevance, and Reliability, reflecting its core strengths. It is built to handle complex queries, process multimodal inputs (text, images, audio, and video), and integrate seamlessly with external systems, making it a versatile tool for both individual and enterprise use.

Technical Architecture of DeepSeek R1

1. Natural Language Processing (NLP) Engine

   - Transformer-Based Models: DeepSeek R1 utilizes transformer-based architectures, similar to those used in models like GPT and BERT, to understand and generate human-like text. These models are trained on massive datasets to capture the nuances of language.

   - Contextual Embeddings: Unlike traditional word embeddings (e.g., Word2Vec), DeepSeek R1 employs contextual embeddings (e.g., BERT-style embeddings) to understand the meaning of words in context. This allows it to handle ambiguous queries and provide more accurate results.

   - Intent Recognition: DeepSeek R1 uses advanced intent recognition algorithms to classify user queries into specific categories (e.g., informational, navigational, transactional). This helps tailor responses to the user’s needs.

2. Real-Time Data Processing

   - Streaming Data Pipelines: DeepSeek R1 is equipped with streaming data pipelines that allow it to process and analyze real-time data from various sources, such as APIs, databases, and IoT devices.

   - Dynamic Knowledge Graphs: It constructs and updates knowledge graphs in real-time, enabling it to connect disparate pieces of information and provide comprehensive answers.

   - Caching Mechanisms: To ensure low latency, DeepSeek R1 employs intelligent caching mechanisms that store frequently accessed data while still prioritizing real-time updates.

3. Multimodal Capabilities

   - Cross-Modal Learning: DeepSeek R1 is trained on multimodal datasets, allowing it to understand and generate responses based on text, images, audio, and video inputs. For example, it can analyze an image and provide a textual description or answer questions about a video.

   - Unified Embedding Space: It uses a unified embedding space to represent different modalities (e.g., text and images) in a shared vector space, enabling seamless cross-modal interactions.

4. Personalization and User Modeling

   - Reinforcement Learning (RL): DeepSeek R1 employs RL techniques to learn from user interactions and improve its responses over time. This allows it to adapt to individual preferences and behaviors.

   - User Profiling: It builds detailed user profiles by analyzing historical interactions, search patterns, and preferences. These profiles are used to deliver personalized recommendations and responses.

 5. Integration with External Systems

   - API-First Design: DeepSeek R1 is built with an API-first approach, making it easy to integrate with third-party platforms, enterprise systems, and cloud services.

   - Data Connectors: It includes pre-built connectors for popular data sources, such as CRM systems, social media platforms, and IoT devices, enabling it to pull data from multiple sources.

---

 DeepSeek R1 vs. ChatGPT: A Technical Comparison

While both DeepSeek R1 and ChatGPT are built on transformer-based architectures, they differ significantly in their design, training, and application. Here’s a detailed technical comparison:

 1. Model Architecture

   - DeepSeek R1: Uses a hybrid architecture that combines transformer-based NLP models with real-time data processing pipelines and knowledge graphs. This allows it to handle both static and dynamic data effectively.

   - ChatGPT: Primarily relies on a transformer-based generative model (GPT-3.5 or GPT-4) trained on a large corpus of text data. It excels at generating coherent and contextually relevant text but lacks real-time data integration.

 2. Training Data

   - DeepSeek R1: Trained on a combination of static datasets and real-time data streams. This enables it to provide up-to-date information and adapt to changing contexts.

   - ChatGPT: Trained on a fixed dataset up to its last update (e.g., October 2023 for GPT-4). While it has a broad knowledge base, it cannot access or process real-time data.

 3. Use Cases

   - DeepSeek R1: Optimized for search, data analysis, and personalized recommendations. Its real-time capabilities make it ideal for applications like financial analysis, healthcare diagnostics, and e-commerce.

   - ChatGPT: Designed for conversational AI, content generation, and customer support. It is widely used for tasks like drafting emails, writing code, and answering general knowledge questions.

 4. Interaction Style

   - DeepSeek R1: Focuses on precision and relevance. Its responses are concise, data-driven, and tailored to the user’s intent.

   - ChatGPT: Emphasizes engagement and creativity. It can generate longer, more detailed responses and is capable of storytelling, brainstorming, and humor.

 5. Integration Capabilities

   - DeepSeek R1: Built for seamless integration with external systems, making it a powerful tool for enterprise applications. It supports APIs, data connectors, and cloud integrations.

   - ChatGPT: While it can be integrated into various platforms, its primary strength lies in standalone conversational applications.

---

 Applications of DeepSeek R1

DeepSeek R1’s technical capabilities make it suitable for a wide range of applications, including:

1. Enterprise Search: Enhancing internal search engines by providing real-time, context-aware results.

2. E-Commerce: Delivering personalized product recommendations based on user behavior and preferences.

3. Healthcare: Assisting in diagnostics by analyzing patient data and medical literature in real-time.

4. Finance: Providing up-to-date market analysis, risk assessments, and investment recommendations.

5. Customer Support: Offering instant, accurate responses to customer queries by integrating with CRM systems.

 The Future of AI: DeepSeek R1 and Beyond

DeepSeek R1 represents a significant leap forward in AI-powered search and conversational tools. Its ability to process real-time data, understand context, and deliver personalized results sets it apart from traditional AI models like ChatGPT. As AI continues to evolve, tools like DeepSeek R1 will play a crucial role in bridging the gap between humans and machines, enabling smarter decision-making and more intuitive interactions.

In conclusion, while ChatGPT excels in creative and conversational tasks, DeepSeek R1 is designed for precision, real-time data processing, and enterprise integration. Together, these tools showcase the diverse potential of AI, paving the way for a future where technology is more intelligent, adaptive, and human-centric.

Sunday, December 1, 2024

Power of AI - Podcast about my tech blog techbytes-madhukar.com


The podcast is auto-generated by https://notebooklm.google.com 

Techbytes-madhukar.com is a blog created by Madhukar Rupakumar where he shares his insights and findings on various technology-related topics. [1] The blog features articles categorized by labels such as ".NET", "AI", "Apple products", "Blockchain", "Cloud technology", and many more. [2] Rupakumar, a Principal Systems Engineer at Hewlett Packard Enterprise with expertise in storage products, uses his platform to discuss a wide array of subjects related to technology and software. [1]

The blog contains posts covering topics like:

Linux commands for beginners. [3]
Interview preparation guides for software engineers. [4]
Free AI/ML LLM Fundamentals Courses. [5]
Cloud computing and data storage terminology. [6]
Free courses on various topics such as Generative AI, React, Angular, SEO, and data science. [7]
Learning resources for data structures and algorithms. [8]

The blog also includes a section where Rupakumar shares details about his professional background and interests.
