
Events

Using Machine Learning for filtering itineraries: A real-life Kiwi.com use case

Thomas Browne and Lucie Blechova, Kiwi.com
In this workshop the participants will get an insight into how we use Machine Learning in practice at Kiwi.com. We will introduce a business case about filtering the itineraries that we display through our travel search partners based on customers’ interest. The motivation is to increase the attractiveness of our offer. This session will be an opportunity for the attendees to follow the whole life-cycle of a Kiwi.com ML project, from the original business problem to the final deployment of our solution, taking into account both data science and engineering-related issues. We will start by defining the expected goals and considering how they translate into an ML problem. This involves defining proper metrics (what does an attractive itinerary actually mean in terms of data?), designing the model (an ML classifier that predicts whether a given itinerary is attractive), and choosing the evaluation approach used to guarantee the solution’s added value. We will then introduce the dataset of our itineraries and go through the featurisation process together.
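To make the featurisation, classification, and evaluation steps more tangible, here is a minimal sketch in Python using scikit-learn. The itinerary attributes, the derived feature, and the "attractive" label below are invented for illustration and are not Kiwi.com's actual data, features, or model.

```python
# Minimal sketch: featurise toy itineraries and train a binary classifier
# that predicts whether an itinerary is "attractive". All values are made up.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical itinerary data: raw attributes plus a label marking whether
# the itinerary attracted customer interest (e.g. was clicked or booked).
itineraries = pd.DataFrame({
    "price_eur":        [49.0, 210.0, 95.0, 430.0, 60.0, 320.0, 150.0, 75.0],
    "duration_minutes": [125, 780, 340, 1020, 150, 900, 400, 200],
    "num_stops":        [0, 2, 1, 3, 0, 2, 1, 0],
    "attractive":       [1, 0, 1, 0, 1, 0, 0, 1],   # target label
})

# Featurisation: derive model inputs from the raw attributes.
itineraries["price_per_hour"] = (
    itineraries["price_eur"] / (itineraries["duration_minutes"] / 60)
)
features = ["price_eur", "duration_minutes", "num_stops", "price_per_hour"]

X_train, X_test, y_train, y_test = train_test_split(
    itineraries[features],
    itineraries["attractive"],
    test_size=0.25,
    stratify=itineraries["attractive"],
    random_state=0,
)

# Classifier: predicts the probability that an itinerary is attractive.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Evaluation on held-out itineraries.
scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC on held-out itineraries:", roc_auc_score(y_test, scores))
```

In practice the label would come from real customer-interest signals and the feature set would be far richer, but the overall shape (featurise, train a classifier, evaluate on held-out itineraries) stays the same.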
Finetuning Open-Source LLMs to small languages

Petr Simecek, Mediaboard
Large Language Models (LLMs) represent a remarkable advancement in artificial intelligence (AI), boasting the capability to generate and comprehend human language. Trained extensively on vast text and code datasets, these models excel at a variety of tasks such as translation, summarization, and question answering. However, a major limitation arises when these LLMs, predominantly trained on English data, are applied to other languages, particularly smaller languages like Czech. Notable models like ChatGPT, Bard, and Claude exhibit proficiency in Czech with minimal grammatical and stylistic errors. Yet many contemporary open-source LLMs, influenced heavily by English-centric datasets, fail to address even basic Czech queries. So what are the choices? At Monitora, our initial experiments with ChatGPT for text summarization have now transitioned to Llama 2 7B, primarily due to privacy considerations.
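As a rough illustration of one way to adapt an open-source model such as Llama 2 7B to a smaller language, the sketch below sets up parameter-efficient LoRA adapters with the Hugging Face transformers and peft libraries. The checkpoint name, LoRA hyperparameters, and target modules are assumptions chosen for illustration; the Czech training data and the actual training loop are omitted, and this is not necessarily the setup used at Monitora.

```python
# Minimal LoRA fine-tuning setup for a Llama-2-7B checkpoint, assuming the
# Hugging Face transformers + peft libraries and access to the gated
# meta-llama/Llama-2-7b-hf weights. The Czech summarization dataset and the
# training loop (e.g. transformers.Trainer) are intentionally left out.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; requires access
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,  # half precision to fit the 7B model in memory
    device_map="auto",          # requires the accelerate package
)

# LoRA adapters: train a small set of low-rank matrices instead of all 7B
# parameters, which makes adapting to a small language like Czech affordable.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```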

Building OpenAI Plugins: Deep Dive into Microsoft Semantic Kernel (SK)

Daniel Costea, European Agency
Microsoft Semantic Kernel (SK) is a new technology that enables the integration of AI Large Language Models (LLMs) like GPT-3.5-Turbo, GPT-4, and DALL-E 3 from OpenAI or Azure OpenAI with conventional programming languages like C#, Python, and Java. SK brings together several key components to provide planning and execution capabilities. These components include: a robust kernel that provides the foundation for all other components; plugins (formerly known as skills) for performing specific tasks; connectors for interfacing with external systems; memories for storing information about past events; steps for defining individual actions; and pipelines for organizing complex multi-stage plans. In this hands-on workshop we explore how to build various plugins:
– building a semantic interface for an existing API, using plugins and execution plans containing semantic and native functions;
– building a GPT-powered chat enriched by real-time information and memories, enhanced through RAG (Retrieval-Augmented Generation) capabilities.
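To give a flavour of what a plugin can look like, below is a minimal sketch using the Python semantic-kernel package (the workshop itself centres on C#, and API details vary between SK releases, so treat this as an approximation). The plugin, function, and service names are illustrative; the OpenAI service assumes an OPENAI_API_KEY environment variable, and the execution plans, memories, and RAG-enriched chat covered in the workshop are not shown.

```python
# Sketch of a Semantic Kernel native-function plugin in Python (assumed
# semantic-kernel package; the exact API differs between SK releases).
import asyncio
from datetime import datetime, timezone

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.functions import kernel_function


class TimePlugin:
    """A native plugin: plain code the kernel can call alongside LLM prompts."""

    @kernel_function(name="now", description="Returns the current UTC time.")
    def now(self) -> str:
        return datetime.now(timezone.utc).isoformat()


async def main() -> None:
    kernel = Kernel()
    # Chat completion connector (assumes OPENAI_API_KEY in the environment).
    kernel.add_service(OpenAIChatCompletion(service_id="chat", ai_model_id="gpt-4"))
    # Register the native plugin under a name usable from prompts and plans.
    kernel.add_plugin(TimePlugin(), plugin_name="time")

    # Invoke the native function directly through the kernel.
    result = await kernel.invoke(plugin_name="time", function_name="now")
    print(result)


asyncio.run(main())
```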