
Posts

Featured

TechBytes on Linux

This is a growing list of Linux commands that might come in handy for Linux users.
1. Found out I had to set the date like this:
   # date -s 2007.04.08-22:46+0000
2. Mounting a Windows (CIFS) share:
   sudo mount -t cifs //<pingable_host_or_ip>/<win_share_name> /build -o user=<user>,domain=<domain>,uid=string,gid=string
3. To install Linux packages from the internet (Ubuntu only):
   apt-get install
4. To determine what ports the machine is currently listening on:
   netstat -an | grep -i listen | less
5. To find in files in Linux:
   find . | xargs grep 'string' -sl
   To find file names matching a pattern and delete them:
   find . -name "IMG_*(1).JPG" -delete
6. To become superuser/root:
   sudo -i
7. To find a running process by name:
   ps -aef | grep "searchstring"
8. Alt + F2 opens the run window in RHEL.
9. To access a Windows share from Linux:
   smb://<host>/d$
10. To know the last reboot date & time:
   $ last reboot | head -1
...
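For instance, the CIFS mount from item 2 might look like this with concrete values; the hostname, share name, credentials, and IDs below are hypothetical placeholders (and the cifs-utils package is assumed to be installed):

   # Mount a Windows share on /build (all names and credentials here are hypothetical)
   sudo mkdir -p /build
   sudo mount -t cifs //fileserver01/builds /build -o user=jdoe,domain=CORP,uid=1000,gid=1000
   # Confirm the mount, and unmount when done
   mount | grep cifs
   sudo umount /build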
Recent posts

Google I/O 2025 summary

  Google just dropped their biggest AI updates ever during Google I/O 2025. Here are 13 new AI updates you can't miss:
- Gemini Live. You can now turn on your camera, point at anything, and talk to Gemini about it in real time
- Imagen. Google's best image model yet
- Veo 3. The first video model with native sound generation
- Deep Research
- Project Astra. A JARVIS-like research prototype exploring the capabilities of a universal AI assistant
- Google Flow. AI filmmaking tool for creators
- Agent Mode. A new feature in the Gemini app that lets you state a goal, and Gemini will handle the steps to achieve it
- Google Jules. Jules is an AI-powered coding assistant that can read your code, write tests, fix bugs, and update dependencies
- AI Mode in Search. AI Mode transforms Google Search into a conversational assistant
- Real-time speech translation in Google Meet
- Google Beam. An AI-first video communication platform that turns 2D video stream...

MCP vs RAG (Model Context Protocol vs Retrieval Augmented Generation)

RAG (Retrieval-Augmented Generation) focuses on enhancing AI responses by retrieving external data, while MCP (Model Context Protocol) standardizes how AI interacts with various data sources and tools.
Overview of RAG
Definition: RAG is an AI architecture that improves the accuracy and relevance of responses generated by large language models (LLMs) by pulling in up-to-date information from external sources, such as databases or APIs, before generating a reply. [2]
Functionality: When a user submits a query, RAG retrieves relevant content from connected data sources and appends this information to the input prompt, enriching the model's context with real-world relevance. This helps reduce inaccuracies and hallucinations in AI responses by grounding them in verifiable sources. [2]
Use Cases: RAG is particularly useful in scenarios where real-time data is crucial, such as customer support, news aggregation, and any application requiring current information. [3]
Overview of MCP
Defi...
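The retrieve-then-augment flow described in that excerpt can be sketched in a few lines of shell. This is purely an illustrative sketch, not part of the post: it greps a hypothetical local notes file for lines related to the question, then folds them into the prompt sent to a locally running model via the Ollama endpoint mentioned in the deployment post below. The notes path, topic keyword, and model name are placeholders, and jq is assumed to be available for building the JSON payload.

   # Hypothetical retrieve-then-generate sketch (the RAG pattern), not from the post
   QUESTION="What is our VPN hostname?"
   # "Retrieval": grab lines mentioning vpn from a local notes file (placeholder path)
   CONTEXT=$(grep -i "vpn" ~/notes/it-notes.txt)
   # "Augmented generation": prepend the retrieved lines to the prompt and query a local model
   curl -s http://localhost:11434/api/generate \
     -d "$(jq -n --arg ctx "$CONTEXT" --arg q "$QUESTION" \
           '{model: "llama3", prompt: ("Answer using only this context:\n" + $ctx + "\n\nQuestion: " + $q), stream: false}')"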

PostgreSQL list of commands

PostgreSQL, or Postgres, is an object-relational database management system that utilizes the SQL language. PSQL is a powerful interactive terminal for working with the PostgreSQL database. It enables users to execute queries efficiently and manage databases effectively. Here, we highlight some of the most frequently used PSQL commands, detailing their functionalities to enhance your PostgreSQL experience.
Top PSQL Commands in PostgreSQL
Here are the top 22 PSQL commands that are frequently used when querying a PostgreSQL database:
Serial No. | Command | Description
1 | psql -d database -U user -W | Connects to a database under a specific user
2 | psql -h host -d database -U user -W | Connect to a database that resides on another host
3 | psql -U user -h host "dbname=db sslmode=require" | Use SSL mode for the connection
4 | \c dbname | Switch connection to a new database
5 | \l | List available databases
6 | \dt | List available tables
7 | \d table_name | Describe a table such as a column, type, modifiers of c...
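As a quick illustration of how a few of these commands fit together in practice, here is a short session sketch; the host, database, user, and table names below are placeholders, not values from the post.

   # Connect to a database on another host (placeholder values); -W prompts for the password
   psql -h db.example.com -d inventory -U appuser -W

   # Once inside the psql prompt, the meta-commands from the list above can be used, e.g.:
   #   \l              list available databases
   #   \dt             list tables in the current database
   #   \d products     describe the products table
   #   \c reporting    switch the connection to the reporting database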

How to deploy Ollama & Open WebUI on your laptop

How to deploy Ollama
1. Installation:
   - Download Ollama: Get the Ollama package from the GitHub repository.
   - Install Dependencies: Ensure you have any required dependencies, including libraries for your specific model.
   - Verify Installation: Use ollama --version to confirm Ollama is installed correctly.
2. Model Deployment and Usage:
   - Pull the Model: Use the ollama pull <model_name> command to download the desired model.
   - Run the Model: Use ollama run <model_name> to initiate the model's execution.
   - Interacting with the Model: Ollama provides an API at http://localhost:11434/api/generate for interacting with the model.
   - Optional: Web UI: Explore Open WebUI for a user-friendly interface to manage and interact with models.
   - Optional: Custom Applications: Build custom applications using libraries like FastAPI and Gr...
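Putting those steps together on the command line might look like the following; this assumes Ollama is already installed and running locally, and the model name is only a placeholder, not one prescribed by the post.

   # Confirm the installation, then download and start a model (model name is a placeholder)
   ollama --version
   ollama pull llama3
   ollama run llama3

   # Call the local API mentioned above; "stream": false returns a single JSON response
   curl -s http://localhost:11434/api/generate \
     -d '{"model": "llama3", "prompt": "Say hello in one sentence.", "stream": false}'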