The Junior AI Engineer will contribute to the development of enterprise-grade Generative AI systems and intelligent automation platforms. This role involves building Retrieval-Augmented Generation (RAG) workflows, integrating Large Language Models (LLMs) into scalable applications, optimizing semantic search systems, and supporting cloud-native AI deployments. The position requires strong technical curiosity, structured problem-solving, and the ability to work with rapidly evolving AI technologies in production-focused environments.
Key Responsibilities:
- Develop intelligent AI workflows powered by Large Language Models and retrieval-based architectures
- Build and maintain Retrieval-Augmented Generation pipelines using vector search and embedding technologies
- Create semantic search systems capable of delivering context-aware and accurate AI responses
- Design prompt engineering strategies for structured outputs, contextual grounding, and response optimization
- Configure inference workflows and optimize AI response generation performance
- Implement reasoning-oriented AI workflows using grounding methods and chain-of-thought prompting techniques
- Support AI model improvement through fine-tuning and reinforcement learning techniques
- Integrate AI and LLM services into scalable backend systems through APIs and microservice-based architectures
- Deploy and manage AI-driven applications on Google Cloud Platform environments
- Develop scalable storage and retrieval mechanisms using PostgreSQL and Firestore databases
- Build and maintain RESTful APIs, with technical documentation using Swagger (OpenAPI) and Postman
- Collaborate with engineering teams using GitHub-based version control and code review workflows
- Research emerging AI frameworks, language models, and advanced tooling for production adoption
- Improve reliability, scalability, and operational efficiency of AI-powered systems
- Monitor application performance, optimize resource utilization, and support production stability initiatives
- Participate in architecture discussions, technical planning, and AI innovation activities
- Maintain technical records, workflow documentation, and deployment references for long-term maintainability
- Contribute to continuous improvement practices focused on AI quality, performance, and automation readiness
Required Skills:
- Strong hands-on experience with Generative AI and LLM-powered application development
- Knowledge of Retrieval-Augmented Generation workflows and vector-based information retrieval systems
- Experience working with embeddings, semantic search, and contextual AI architectures
- Understanding of prompt engineering techniques, grounding methods, and inference parameter optimization
- Familiarity with reasoning-driven AI workflows and structured response generation approaches
- Practical programming expertise in Python for AI and backend development
- Experience building and integrating APIs for scalable AI services
- Understanding of distributed application architecture and backend system design principles
- Knowledge of PostgreSQL, Firestore, and scalable data management concepts
- Hands-on exposure to Google Cloud Platform services and cloud-native deployment workflows
- Familiarity with GitHub, collaborative development practices, and repository management
- Experience using Postman and Swagger for API testing and technical documentation
- Strong analytical thinking, debugging ability, and problem-resolution skills
- Ability to manage technically complex assignments with accountability and ownership
- Good communication skills and collaborative mindset for cross-functional teamwork
- Adaptability to evolving AI technologies, frameworks, and production requirements
Preferred Skills:
- Experience with vector databases such as Pinecone, Milvus, Chroma, or similar platforms
- Exposure to enterprise AI copilots, intelligent assistants, or conversational AI ecosystems
- Familiarity with fine-tuning workflows and evaluation methods for Large Language Models
- Understanding of MLOps practices, deployment automation, and AI lifecycle management
- Awareness of CI/CD workflows, containerization, and scalable deployment strategies
- Exposure to healthcare-focused AI solutions or regulated enterprise environments
- Interest in AI research, emerging frameworks, and advanced language model ecosystems
- Understanding of observability, monitoring, and AI system optimization practices
- Familiarity with cloud-based distributed systems and intelligent workflow orchestration
Education:
B.Tech / B.E. / MCA / M.Tech / BCA / B.Sc. / M.Sc. in Computer Science, Artificial Intelligence, Data Science, Information Technology, Software Engineering, Cloud Computing, or a related technical discipline from a recognized institution or university.