Description

Large Language Models (LLMs) and multimodal models

Foundation Model Architectures (Transformers, Encoder/Decoder)

API integration with providers (e.g., OpenAI, Cohere)

Model tuning pipeline development

Prompt engineering

Reinforcement learning from human feedback (RLHF)

Frameworks: PyTorch, TensorFlow, LangChain, LlamaIndex

Strong programming skills in Python and experience with data engineering workflows (Spark, Airflow, SQL)
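As a small illustration of the prompt engineering and provider API integration items above, a minimal sketch of building a chat-style message list (the instruction text and context are hypothetical; a real call would pass this list to the provider's SDK):

```python
import json

def build_prompt(context: str, question: str) -> list:
    # Instruction-style message list in the shape used by chat-completion APIs:
    # a system instruction plus a user turn carrying retrieved context.
    return [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

messages = build_prompt(
    "LoRA adapts frozen weights with low-rank matrices.",
    "What does LoRA adapt?",
)
print(json.dumps(messages, indent=2))
```

Keeping prompt construction in a plain function like this makes templates easy to version and unit-test independently of any provider.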

Hands-on experience in:

Fine-tuning & Prompt Engineering (LoRA, PEFT)

Retrieval-Augmented Generation (RAG) pipelines

Python: NumPy, Pandas, Scikit-learn, HuggingFace, OpenAI API
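The fine-tuning items above center on the LoRA idea used by PEFT: a frozen weight matrix W is adapted by a trainable low-rank update BA scaled by alpha/r. A minimal NumPy sketch (dimensions and scaling values are illustrative, not from any specific model):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16  # hidden size, LoRA rank, scaling factor (hypothetical)

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path plus low-rank adapter path, scaled by alpha / r.
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

x = rng.normal(size=(1, d))
# With B zero-initialized, the adapted output equals the base output,
# so training starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only A and B are trained, which is why LoRA reduces trainable parameters from d*d to 2*r*d per adapted matrix.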

Knowledge of AI ethics, explainability, and governance in generative models

Secondary skills:

Experience with vector databases (e.g., FAISS, Weaviate, Pinecone) and scalable RAG systems
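The retrieval step those vector databases provide can be sketched in plain NumPy: cosine similarity over embeddings, as in a flat (exhaustive) index. The corpus and 4-dimensional "embeddings" below are toy stand-ins for a real embedding model:

```python
import numpy as np

# Toy corpus with hypothetical 4-dim embeddings standing in for a real model.
docs = ["LoRA fine-tuning", "RAG pipelines", "Airflow DAGs"]
emb = np.array([[1.0, 0.1, 0.0, 0.0],
                [0.1, 1.0, 0.1, 0.0],
                [0.0, 0.0, 0.1, 1.0]])

def top_k(query_vec, k=1):
    # Cosine similarity = dot product of L2-normalized vectors,
    # the same scoring a flat vector index performs exhaustively.
    q = query_vec / np.linalg.norm(query_vec)
    m = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    scores = m @ q
    idx = np.argsort(-scores)[:k]
    return [(docs[i], float(scores[i])) for i in idx]

query = np.array([0.1, 1.0, 0.0, 0.0])  # a query "near" the RAG document
print(top_k(query))
```

Libraries like FAISS scale this same operation with approximate-nearest-neighbor indexes when the corpus is too large for exhaustive search.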

Familiarity with GPU compute infrastructure and distributed model training

MLOps for LLMs: Deployment, Monitoring, Versioning

Classical machine learning: supervised and unsupervised learning models
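The supervised/unsupervised distinction above in a minimal scikit-learn sketch, using a toy 1-D dataset (values chosen only to be clearly separable):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy 1-D feature: small values are class 0, large values are class 1.
X = np.array([[0.0], [0.2], [0.4], [2.0], [2.2], [2.4]])
y = np.array([0, 0, 0, 1, 1, 1])

# Supervised: the model learns the label boundary from (X, y) pairs.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.1], [2.1]]))  # -> [0 1]

# Unsupervised: KMeans groups the same points without ever seeing y.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # two clusters matching the low/high groups
```

Note that the cluster labels from KMeans are arbitrary identifiers; only the grouping is meaningful, which is the practical difference from the supervised case.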

Education

Any Graduate