2025

LLM Interview Questions

LLM

Hyperparameters are external settings chosen before training, such as the learning rate or regularization strength.

LLM Training Epoch

LLM

As large language models (LLMs) scale up, researchers have begun to notice a growing imbalance between model size and the availability of high-quality...

vllm throughput

LLM

In large-language-model (LLM) inference serving contexts, once the model compute becomes sufficiently fast, the performance bottleneck often shifts to...

LangGraph Sample Project

LLM

[x] Independently deployable services - Each agent can scale horizontally (e.g., analysisservice replicas) - You can version and deploy agents...

FastMCP MCP Server Hub

LLM

MCP Server Hub: Currently, our different projects use various MCP servers. To streamline and unify the process, we plan to implement a hub MCP...

How LLM Tools work

LLM

Tools enable large language models (LLMs) to interact with external systems, APIs, or data sources, extending...

LangChain Retry Logic

LLM

LangChain Invoke Retry Logic: LLM calls are not stable and may fail due to network issues or other reasons; therefore, retry logic is necessary.
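The point above — that transient failures make retries necessary — can be sketched in plain Python. This is an illustrative helper, not LangChain's actual API (LangChain runnables also ship a built-in `.with_retry()` method); the function and parameter names here are assumptions for the sketch:

```python
import time

def invoke_with_retry(call, max_retries=3, backoff=1.0):
    """Run `call()` and retry on failure with exponential backoff.

    `call` is any zero-argument callable, e.g. a wrapper around an
    LLM client's invoke method (hypothetical in this sketch).
    """
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            time.sleep(backoff * (2 ** attempt))  # 1x, 2x, 4x, ... backoff

# demo: a flaky call that fails twice, then succeeds
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network issue")
    return "ok"

print(invoke_with_retry(flaky, backoff=0.01))  # prints "ok" after two retries
```

Exponential backoff avoids hammering a struggling endpoint; in production you would typically also cap the total wait and only retry on retryable error types.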

MCP Transports

LLM

| Feature | stdio | sse (Server-Sent Events) | streamable-http |
|---|---|---|---|
...

Text to SQL (Smolagents)

LLM

Out: None [Step 1: Duration 146.87 seconds| Input tokens: 2,113 | Output tokens: 923] ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ Step 2...

MCP Server & Client (SSE)

LLM

Step-by-Step Guide: Building an MCP Server using the Python SDK, AlphaVantage & Claude. A Model Context Protocol (MCP) lab

RAG-Reranking

LLM

Retrieval-Augmented Generation (RAG) is a powerful approach that combines retrieval and generation to produce high-quality responses. However, the...

GenAI Projects

LLM

Learning never exhausts the mind ― Leonardo da Vinci

LangGraph VS AutoGen

LLM

| Feature | LangGraph | AutoGen |
|---|---|---|
| Core Concept | Graph-based workflow for LLM chaining | Multi-agent system with customizable agents |
...

Local LLM Setup

LLM

If you see this in your VS Code, congratulations! You have successfully set up Ollama for code generation and assistance in Visual Studio Code...

2024

Databricks Wheel Job

python

My previous Spark project is Scala-based, and IntelliJ IDEA makes it convenient to compile and test. :smile: The Databricks Jobs UI saves you time...

ZIO

Scala

This video is helpful for understanding it. type:video

Reflex Learning

python

Reflex (formerly Pynecone) is a library for building full-stack web apps in pure Python. Repo Video type:video

Model Registry

ML

Problem: How to introduce ML-based products/features to cross-functional teams.

2021

Setup Minikube

k8s

bin/spark-submit --master k8s://https://192.168.99.100:8443 --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf...

2020

Spark SQL

spark

--master MASTER_URL --> run mode, e.g. spark://host:port, mesos://host:port, yarn, or local.

Spark Optimization

spark

PROCESS_LOCAL: data is in the same JVM as the running code. This is the best locality possible. NODE_LOCAL: data is on the same node. Examples might be in...

Airflow

airflow

import airflow from airflow.models import DAG from airflow.operators.python_operator import PythonOperator

Gradient Descent

ML

Vanilla gradient descent, aka batch gradient descent, computes the gradient of the cost function w.r.t. the parameters θ
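The update rule this describes is θ ← θ − η ∇θ J(θ). A minimal sketch in pure Python, minimizing the toy cost J(θ) = θ² (the learning rate and step count are illustrative choices, not values from the post):

```python
def batch_gradient_descent(grad, theta, lr=0.1, steps=100):
    """Vanilla (batch) gradient descent: theta <- theta - lr * grad(theta)."""
    for _ in range(steps):
        theta = theta - lr * grad(theta)  # step against the gradient
    return theta

# J(theta) = theta^2 has gradient 2*theta and its minimum at theta = 0
theta_star = batch_gradient_descent(lambda t: 2 * t, theta=5.0)
print(theta_star)  # converges toward 0
```

"Batch" here means the gradient is computed over the full training set each step, in contrast to stochastic or mini-batch variants that use subsets.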

2012

Repo List

| Repos | language | link |
...
