Key Responsibilities
- Design, develop, and fine-tune locally hosted Large Language Models (LLMs) using frameworks such as TensorFlow and PyTorch, ensuring high performance and accuracy in real-world applications.
- Build and optimize custom Retrieval-Augmented Generation (RAG) systems by integrating LLMs with enterprise data sources through robust ETL processes and database design.
- Implement unsupervised learning techniques to enhance model capabilities, leveraging big data tools like Hadoop, Spark, and Talend for scalable data processing.
- Deploy AI models efficiently across cloud platforms such as AWS, ensuring scalability, security, and reliability throughout the model lifecycle.
- Develop natural language processing solutions for language understanding and generation, applying machine learning frameworks and established NLP techniques.
- Collaborate with cross-functional teams to translate business needs into technical solutions and ensure seamless integration with existing systems.
Requirements
- Proficiency in machine learning frameworks such as TensorFlow, PyTorch, or similar tools for model training and deployment.
- Strong knowledge of natural language processing (NLP), including transformer architectures and language modeling techniques.
- Experience with big data technologies including Hadoop, Spark, Kafka, and ETL tools like Talend for data ingestion and processing.
- Familiarity with cloud computing platforms like AWS for deploying scalable AI solutions.
- Programming expertise in Python, Java, C, Bash, and VBA for automation and system-integration tasks.