Thursday, May 22, 2025

Google I/O 2025 summary

Google just dropped their biggest AI updates ever during Google I/O 2025.

Here are 13 new AI updates you can't miss:

  1. Gemini Live. You can now turn on your camera, point at anything, and talk to Gemini about it in real time
  2. Imagen. Google's best image model yet
  3. Veo 3. The first video model with native sound generation
  4. Deep Research
  5. Project Astra. A JARVIS-like research prototype exploring the capabilities of a universal AI assistant
  6. Google Flow. An AI filmmaking tool for creators
  7. Agent Mode. A new feature in the Gemini app that lets you state a goal, and Gemini will handle the steps to achieve it
  8. Google Jules. An AI-powered coding assistant that can read your code, write tests, fix bugs, and update dependencies
  9. AI Mode in Search. AI Mode transforms Google Search into a conversational assistant
  10. Real-time speech translation in Google Meet
  11. Google Beam. An AI-first video communication platform that turns 2D video streams into realistic 3D experiences
  12. Gemma 3n. A new open-source AI model optimized for mobile devices
  13. Try-On. Google's Virtual Try-On feature lets you upload a photo of yourself to see how clothes would look on you

What are your thoughts on this?

Free MCP (Model Context Protocol) course

Worth investing time in learning the Model Context Protocol with this free course from Hugging Face:

https://huggingface.co/learn/mcp-course/unit0/introduction

Wednesday, May 14, 2025

MCP vs RAG (Model Context Protocol vs Retrieval Augmented Generation)



RAG (Retrieval-Augmented Generation) focuses on enhancing AI responses by retrieving external data, while MCP (Model Context Protocol) standardizes how AI interacts with various data sources and tools.

Key Differences
Scope: RAG is a specific method focused on improving the accuracy of LLM outputs by grounding them in external knowledge, while MCP is a broader protocol that standardizes interactions between AI and various data systems.

Data Retrieval: RAG retrieves external data each time a query is made, whereas MCP allows LLMs to access contextual memory and external data more efficiently, reducing the need for repeated data retrieval.

Integration: RAG requires specific setups for each data source, while MCP provides a universal framework that simplifies the integration of multiple data sources and tools into AI applications.
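
To make the retrieval difference concrete, here is a minimal sketch of a RAG-style query loop in Python. The retriever and document store are hypothetical placeholders rather than any particular framework's API; the point is only that relevant context is fetched and prepended to the prompt on every query.

    # Minimal RAG-style loop (illustrative sketch, not a production retriever).
    def retrieve(query: str, documents: list[str], top_k: int = 3) -> list[str]:
        # Naive keyword-overlap scoring; a real system would use embeddings
        # and a vector store instead.
        def overlap(doc: str) -> int:
            return len(set(query.lower().split()) & set(doc.lower().split()))
        return sorted(documents, key=overlap, reverse=True)[:top_k]

    def build_prompt(query: str, documents: list[str]) -> str:
        # Fresh retrieval happens for every query, then the retrieved context
        # is stuffed into the prompt that would be sent to the LLM.
        context = "\n".join(retrieve(query, documents))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    docs = [
        "MCP standardizes how AI applications connect to tools and data sources.",
        "RAG retrieves external documents and adds them to the model's prompt.",
    ]
    print(build_prompt("What does RAG do?", docs))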

Conclusion
Both RAG and MCP play significant roles in enhancing AI capabilities, but they serve different purposes. RAG is ideal for applications needing real-time data retrieval to improve response accuracy, while MCP offers a standardized approach for integrating various tools and data sources, making it easier to build complex AI systems. Understanding these differences is crucial for developers and organizations looking to leverage AI effectively in their applications.

PostgreSQL list of commands

PostgreSQL, or Postgres, is an object-relational database management system that utilizes the SQL language. PSQL is a powerful interactive terminal for working with the PostgreSQL database. It enables users to execute queries efficiently and manage databases effectively.

Here, we highlight some of the most frequently used PSQL commands, detailing their functionalities to enhance your PostgreSQL experience.
Top PSQL Commands in PostgreSQL

Here are the top 22 PSQL commands that are frequently used when querying a PostgreSQL database:
1. psql -d database -U user -W : Connect to a database under a specific user
2. psql -h host -d database -U user -W : Connect to a database that resides on another host
3. psql -U user -h host "dbname=db sslmode=require" : Use SSL mode for the connection
4. \c dbname : Switch the connection to a new database
5. \l : List available databases
6. \dt : List available tables
7. \d table_name : Describe a table (columns, types, modifiers, etc.)
8. \dn : List all schemas of the currently connected database
9. \df : List available functions in the current database
10. \dv : List available views in the current database
11. \du : List all users and their assigned roles
12. SELECT version(); : Retrieve the current version of the PostgreSQL server
13. \g : Execute the last command again
14. \s : Display the command history
15. \s filename : Save the command history to a file
16. \i filename : Execute psql commands from a file
17. \? : List all available psql commands
18. \h : Get help on SQL statement syntax
19. \e : Edit the current query buffer in your own editor
20. \a : Toggle between aligned and unaligned column output
21. \H : Toggle HTML output format
22. \q : Exit the psql shell

Additional Information:
The -d option specifies the database name.
The -U option specifies the database user.
The -h option indicates the host on which the database server resides.
Running \h ALTER TABLE gives detailed help on the ALTER TABLE statement.
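
For completeness, the same connection options map directly onto programmatic access. The sketch below assumes the psycopg2 driver and placeholder host/database/user values; any PostgreSQL client library would work similarly.

    # Connecting to PostgreSQL from Python; all connection values are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="localhost",    # equivalent to psql -h
        dbname="mydb",       # equivalent to psql -d
        user="postgres",     # equivalent to psql -U
        password="secret",   # psql -W prompts for this interactively
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])  # same output as command 12 in the list above
    conn.close()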

Friday, May 9, 2025

How to deploy Ollama & Open WebUI on your laptop

How to deploy Ollama
1. Installation:
  • Download Ollama: Get the Ollama package from the GitHub repository. 
  • Install Dependencies: Ensure you have any required dependencies, including libraries for your specific model. 
  • Verify Installation: Use ollama --version to confirm Ollama is installed correctly. 
2. Model Deployment and Usage:
  • Pull the Model: Use the ollama pull <model_name> command to download the desired model. 
  • Run the Model: Use ollama run <model_name> to initiate the model's execution. 
  • Interacting with the Model: Ollama provides an API at http://localhost:11434/api/generate for interacting with the model (see the sketch after this list).
  • Optional: Web UI: Explore Open WebUI for a user-friendly interface to manage and interact with models. 
  • Optional: Custom Applications: Build custom applications using libraries like FastAPI and Gradio to integrate Ollama models into your workflows. 
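
As a quick illustration of the generate endpoint mentioned in the list above, here is a minimal non-streaming call from Python using the requests library. The model name llama3 is only an example; use whichever model you pulled.

    # Minimal request to a locally running Ollama server (non-streaming).
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",           # example; replace with the model you pulled
            "prompt": "Why is the sky blue?",
            "stream": False,             # return one JSON object instead of a stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])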

How to deploy Open WebUI

Open WebUI is an extensible, feature-rich, and user-friendly self-hosted AI platform designed to operate entirely offline. It supports various LLM runners like Ollama and OpenAI-compatible APIs, with a built-in inference engine for RAG, making it a powerful AI deployment solution.

How to Install 🚀

Installation via Python pip 🐍

Open WebUI can be installed using pip, the Python package installer. Before proceeding, ensure you're using Python 3.11 to avoid compatibility issues.

  1. Install Open WebUI: Open your terminal and run the following command to install Open WebUI:

    pip install open-webui
  2. Running Open WebUI: After installation, you can start Open WebUI by executing:

    open-webui serve

This will start the Open WebUI server, which you can access at http://localhost:8080
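
If you want to confirm from a script that the server is up, a simple HTTP check against the default address works; the port below is just the default 8080 from the command above.

    # Quick check that Open WebUI is reachable on its default port.
    import requests

    r = requests.get("http://localhost:8080", timeout=5)
    print("Open WebUI is up" if r.ok else f"Unexpected status: {r.status_code}")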



To upgrade the Open WebUI components, run:

    pip install open-webui --upgrade
