With tools like Ollama and open-source LLMs, you can build natural-language, data-driven systems without relying on costly external APIs that may not meet compliance requirements. Learn to unlock the potential of local LLMs such as Llama, Mistral, or Gemma for secure, local biomedical data analysis.
Over the past few years, numerous packages and methods have emerged, making the concepts of ‘Prompting,’ ‘RAG,’ and ‘Agent’ increasingly complex and confusing. But what if you could launch a local LLM + RAG system with ease?
In this session, you’ll learn how to build intelligent agents, integrate a Retrieval-Augmented Generation (RAG) system for querying biomedical datasets, and create user-friendly tools for non-coders using Streamlit.
Key Takeaways:
- Run local LLMs for free with Ollama: Set up powerful models without API calls to external services (a minimal chat sketch follows this list).
- Develop a RAG system: Ground your LLM’s answers in single-cell datasets of your choice.
- Agent-driven data analysis: Create agents that map user questions to actionable queries and execute them.
- Streamlit app for non-coders: Build a chat-based UI for easy access to analysis tools.
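As a taste of how little code this involves, here is a minimal sketch of a fully local chat call through the `ollama` Python client. The model tag `llama3.1` is an assumption for illustration; the sketch presumes the Ollama server is running and the model has been pulled (`ollama pull llama3.1`).

```python
# Minimal local chat call via the ollama Python client (pip install ollama).
# Assumes the Ollama server is running and the model was pulled beforehand;
# the tag "llama3.1" is an illustrative assumption -- any pulled model works.
import ollama

response = ollama.chat(
    model="llama3.1",
    messages=[{
        "role": "user",
        "content": "In one sentence, what does single-cell RNA-seq measure?",
    }],
)
print(response["message"]["content"])  # generated entirely on your own hardware
```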
What We’ll Demonstrate:
- Set up Ollama on a Code Ocean Capsule with GPU support and run open-source LLMs locally.
- Create a vector database using Ollama embeddings and query it with natural language (see the RAG sketch after this list).
- Build an agent that uses gene expression data and biological models to answer specific questions (see the agent sketch below).
- Quickly design a UI around your LLM-RAG-Agent system using Streamlit (see the UI sketch below).
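To make the RAG step concrete, here is a minimal sketch of the pattern: embed short dataset descriptions with Ollama, store them in a vector database, retrieve the best match for a natural-language question, and hand it to the LLM as context. The embedding model `nomic-embed-text`, the `chromadb` store, and the toy documents are all illustrative assumptions, not the exact stack used in the session.

```python
# RAG retrieval sketch: Ollama embeddings + an in-memory Chroma vector store.
# nomic-embed-text and chromadb are illustrative choices, not prescriptions.
import ollama
import chromadb

def embed(text: str) -> list[float]:
    """Embed a string with a locally served Ollama embedding model."""
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

# Toy corpus standing in for per-dataset or per-cluster descriptions.
docs = [
    "Cluster 3 is enriched for CD8A and GZMB, consistent with cytotoxic T cells.",
    "Cluster 7 expresses MS4A1 and CD79A, consistent with B cells.",
]

store = chromadb.Client().create_collection(name="datasets")
store.add(
    ids=[str(i) for i in range(len(docs))],
    documents=docs,
    embeddings=[embed(d) for d in docs],
)

# Retrieve context for a question, then ground the LLM's answer in it.
question = "Which cluster looks like B cells?"
hits = store.query(query_embeddings=[embed(question)], n_results=1)
context = hits["documents"][0][0]

answer = ollama.chat(
    model="llama3.1",
    messages=[{
        "role": "user",
        "content": f"Context: {context}\n\nQuestion: {question}",
    }],
)
print(answer["message"]["content"])
```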
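The agent step boils down to letting the model choose a query and letting plain Python execute it. The sketch below assumes a single hypothetical tool, `mean_expression`, a toy expression table, and the same illustrative model tag; a real agent would register analysis functions over an actual gene expression matrix.

```python
# Agent sketch: the LLM maps a free-text question to one of a few registered
# tools, and plain Python executes the chosen call. The tool, the toy data,
# and the model tag are illustrative assumptions.
import json
import ollama

EXPRESSION = {"CD8A": 2.7, "MS4A1": 0.1, "GZMB": 3.1}  # toy mean expression values

def mean_expression(gene: str) -> str:
    """Look up mean expression for a gene in the toy table."""
    value = EXPRESSION.get(gene.upper())
    return f"Mean expression of {gene}: {value}" if value is not None else f"{gene} not found"

TOOLS = {"mean_expression": mean_expression}

def run_agent(question: str) -> str:
    # Ask the model to emit a JSON tool call; format="json" constrains the
    # output to JSON, though the model may still need prompt-level guidance.
    plan = ollama.chat(
        model="llama3.1",
        format="json",
        messages=[{
            "role": "user",
            "content": (
                "Map this question to a tool call. Available tool: "
                'mean_expression(gene). Reply as {"tool": ..., "gene": ...}.\n'
                f"Question: {question}"
            ),
        }],
    )
    call = json.loads(plan["message"]["content"])
    return TOOLS[call["tool"]](call["gene"])

print(run_agent("How strongly is CD8A expressed on average?"))
```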
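Finally, the Streamlit layer can be very thin. This hedged sketch wires a chat UI directly to a local model; the RAG retrieval and agent routing from the sketches above would slot in where `ollama.chat` is called.

```python
# streamlit_app.py -- run with: streamlit run streamlit_app.py
# A minimal chat front end over a local Ollama model; RAG and agent calls
# would slot in where ollama.chat is invoked below.
import ollama
import streamlit as st

st.title("Ask your dataset")

if "messages" not in st.session_state:
    st.session_state.messages = []  # chat history persisted across reruns

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

# Handle a new user question.
if prompt := st.chat_input("Ask a question about your data"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    reply = ollama.chat(model="llama3.1", messages=st.session_state.messages)
    content = reply["message"]["content"]
    st.session_state.messages.append({"role": "assistant", "content": content})
    with st.chat_message("assistant"):
        st.markdown(content)
```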
Who Should Listen?
This webinar is tailored for computational scientists and biologists eager to bring artificial intelligence, and in particular LLMs and agentic systems, into their biology workflows.