Blog

Bhimesh Chauhan

12th January, 2020 - Boston, MA

Learn and Understand Recursion in JavaScript

I’ll walk you through two popular JS recursion examples in 10 minutes so you can finally understand how recursion works in JavaScript.


What is Recursion?

Recursion is simply when a function calls itself.

Let’s jump right in and take a look at probably the most famous recursion example. This example returns the factorial of a supplied integer:

function factorial(x) {
  if (x < 0) return;
  if (x === 0) return 1;
  return x * factorial(x - 1);
}

factorial(3);
// 6

Woah. It’s okay if that makes no sense to you. The important part is happening on line 4: return x * factorial(x - 1);. As you can see, the function is actually calling itself again (factorial(x - 1)), but with a parameter that is one less than when it was called the first time. This makes it a recursive function.

Before I break down that code example any further, it’s important you understand what factorials are.

To get the factorial of a number, you multiply that number by every positive whole number below it, all the way down to one.

Example 1: The factorial of 4 is 4 * 3 * 2 * 1 or 24.

"Example 2: The factorial of 2 is just 2 * 1 or "2.

Awesome! Now that our high school math lesson is over, we can return to the good stuff!

The three key features of recursion

All recursive functions should have three key features:

A Termination Condition

Simply put: if (something bad happened) { stop; }. The Termination Condition is our recursion fail-safe. Think of it like your emergency brake. It’s put there in case of bad input to prevent the recursion from ever running. In our factorial example, if (x < 0) return; is our termination condition. It’s not possible to take the factorial of a negative number, so we don’t even want to run our recursion if a negative number is input.

A Base Case

Simply put: if (this happens) { Yay! We’re done; }. The Base Case is similar to our termination condition in that it also stops our recursion. But remember: the termination condition is a catch-all for bad data, whereas the base case is the goal of our recursive function. Base cases are usually within an if statement. In the factorial example, if (x === 0) return 1; is our base case. We know that once we’ve gotten x down to zero, we’ve succeeded in determining our factorial!

The Recursion

Simply put: our function calling itself. In the factorial example, return x * factorial(x - 1); is where the recursion actually happens. We’re returning the value of the number x multiplied by the value of whatever factorial(x - 1) evaluates to.
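
To see the recursion unwind, here’s a quick trace of factorial(3). Each call waits on the next one until the base case finally returns 1, and then the results multiply back up the call stack:

factorial(3)
// returns 3 * factorial(2)
// returns 3 * (2 * factorial(1))
// returns 3 * (2 * (1 * factorial(0)))   <- base case: factorial(0) returns 1
// returns 3 * (2 * (1 * 1))
// returns 6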

Bhimesh Chauhan

22nd July, 2024 - Toronto, CA

Implementing Retrieval-Augmented Generation (RAG) with LLMs

In this post, we’ll explore how RAG enhances Large Language Models (LLMs) by integrating real-time data retrieval, using simple examples to illustrate the process.


What is Retrieval-Augmented Generation (RAG)?

RAG combines the power of LLMs with external knowledge retrieval.

Instead of relying entirely on pre-trained knowledge, RAG retrieves relevant information from a connected source (like a database or an API) and augments the LLM’s response in real time.

async function getAnswer(query) {
  const documents = await retrieveRelevantDocs(query);
  const prompt = buildPrompt(query, documents);
  return await llm.generate(prompt);
}

getAnswer("What are the symptoms of diabetes?");
// Returns accurate and current medical information

This method helps overcome some limitations of LLMs. An LLM may not have up-to-date information, but with RAG the model retrieves data on the fly from external sources to enhance the response.

Let’s dive deeper into why RAG is useful and how it complements LLMs.

Think of RAG as a way to keep the LLM's answers accurate and relevant even after the model has been deployed.

Example 1: A legal advice chatbot retrieves the latest regulations to provide accurate responses.

Example 2: A medical assistant fetches recent journal publications when asked about new treatments.

These examples show how RAG ensures that AI applications stay relevant in dynamic fields.

How Does RAG Work?

RAG involves three main components:

1. Query and Document Retrieval

When a user makes a request, the query is sent to a document store or external API to retrieve relevant data. Think of it as searching through a library to find the best books for your topic.
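
As a rough sketch of what retrieval could look like (not a production retriever), you can embed the query and rank stored documents by cosine similarity. Here embed and docStore are hypothetical placeholders rather than a specific library’s API:

// Hypothetical sketch: rank documents in an in-memory store by cosine
// similarity between the query embedding and each document's precomputed
// embedding. `embed` and `docStore` are assumed helpers, not a real API.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function retrieveRelevantDocs(query, topK = 3) {
  const queryVector = await embed(query); // assumed embedding function
  return docStore
    .map((doc) => ({ ...doc, score: cosineSimilarity(queryVector, doc.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}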

2. Augmenting the Prompt

The retrieved documents are embedded into the LLM prompt, enhancing its response with external knowledge. This step ensures the response is not just based on the model’s pre-trained data.

const prompt = buildPrompt(userQuery, retrievedDocs);
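
One plausible way to implement buildPrompt is to stitch the retrieved text into the prompt as numbered context ahead of the user’s question. The template below is illustrative, not prescriptive:

// Minimal sketch: concatenate retrieved documents as numbered context
// blocks, then append the user's question. Assumes each doc has a
// `text` field; adapt the template to your own needs.
function buildPrompt(userQuery, retrievedDocs) {
  const context = retrievedDocs
    .map((doc, i) => `[${i + 1}] ${doc.text}`)
    .join("\n");
  return `Answer the question using only the context below.\n\n` +
    `Context:\n${context}\n\nQuestion: ${userQuery}\nAnswer:`;
}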

3. Generating the Response

Finally, the LLM generates a response based on the augmented prompt, producing accurate and contextual answers.

return llm.generate(prompt);

RAG ensures that the model provides fact-based answers even when it didn’t originally know the information.

The retrieval step ensures that the AI isn't guessing or 'hallucinating' answers.

Example: Instead of relying on old data, a travel assistant fetches live flight statuses to assist users.

This prevents outdated information from slipping through.

Applications of RAG with LLMs

Here are some common use cases where RAG with LLMs shines:

Customer Support Bots

Bots can fetch relevant knowledge base articles to solve user queries faster and more accurately.

Medical Assistants

LLMs with RAG integration pull the latest medical research to provide informed recommendations during consultations.

Legal Research

A legal AI assistant retrieves recent case law or statutes to answer complex legal questions accurately.

Benefits of RAG

1. Real-time Relevance: Ensures that responses are always up-to-date with the latest information.

2. Reduced Hallucination: By grounding responses in real data, it minimizes the chances of incorrect or fabricated answers.

3. Enhanced Utility: Makes LLMs suitable for use in dynamic environments like healthcare, finance, and law.

With RAG, applications powered by LLMs are more powerful, accurate, and trustworthy.

Bhimesh Chauhan

14th September, 2024 - Toronto, CA

Fine-Tuning LLMs for Domain-Specific Applications

In this blog, I’ll walk you through the basics of fine-tuning LLMs and explain the steps required to make them more effective for specific tasks.


What is Fine-Tuning?

Fine-tuning involves adjusting the parameters of a pre-trained language model on new specialized data.

Pre-trained models like GPT or BERT are trained on large datasets containing general information, but they may not perform well on niche topics like medical diagnoses or legal cases. Fine-tuning helps tailor these models to excel in specific domains.

const fineTuneModel = async (dataset) => {
  const model = await loadPretrainedModel("gpt-3.5-turbo");
  model.train(dataset, { epochs: 5 });
  return model;
};

const medicalModel = await fineTuneModel(medicalData);
// Model fine-tuned on medical notes

Fine-tuning helps overcome generalization issues. When LLMs are fine-tuned, they become more focused and relevant for specific queries, improving their performance on tasks related to the target domain.

Let’s dive deeper into why fine-tuning matters for real-world use cases.

Think of fine-tuning as adjusting a generic recipe to suit your specific taste. You retain the original recipe's structure but make changes to align with your personal preferences.

Example 1: A financial assistant chatbot fine-tuned on recent market trends can provide better advice compared to a general-purpose model.

Example 2: A customer support bot tailored to your business responds accurately to common issues and inquiries, reducing response time and improving customer satisfaction.

How to Fine-Tune an LLM?

Fine-tuning can be broken down into a few key steps:

1. Data Preparation

Collect and clean your dataset. Ensure the text is properly formatted and labeled if necessary. For example, if you’re fine-tuning on medical notes, each record should contain consistent terminology and structure, as sketched below.
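
As a rough illustration of this step, here’s what normalizing raw records into training pairs might look like. The prompt/completion field names and the JSONL output format are assumptions; your framework may expect a different shape:

// Sketch of data preparation: turn raw { question, answer } records into
// prompt/completion pairs, one JSON object per line (JSONL). Field names
// are hypothetical and depend on your fine-tuning framework.
const fs = require("fs");

const rawRecords = [
  { question: "What does the note say about dosage?", answer: "Take 10mg twice daily." },
]; // illustrative stand-in for your real dataset

const lines = rawRecords
  .map((record) => ({
    prompt: record.question.trim(),
    completion: record.answer.trim(),
  }))
  .map((example) => JSON.stringify(example))
  .join("\n");

fs.writeFileSync("train.jsonl", lines);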

2. Model Loading

Start by loading a pre-trained model like GPT or BERT from a model hub or library.

const model = await loadPretrainedModel('bert-base');

3. Training the Model

Feed your specialized data into the model using your preferred framework (e.g. PyTorch or TensorFlow). Make sure to adjust hyperparameters like the learning rate to avoid overfitting.

model.train(data, { batchSize: 16, epochs: 3 });

4. Evaluating and Fine-Tuning

Evaluate the fine-tuned model's performance using metrics like accuracy or F1-score. Continue adjusting the parameters as needed.
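
For a quick sense of what evaluation can look like, here’s a minimal sketch that computes accuracy and F1 for a binary classification task, given model predictions and gold labels as arrays of 0/1:

// Minimal evaluation sketch for a binary task: compare predictions
// against gold labels and compute accuracy, precision, recall, and F1.
function evaluate(predictions, labels) {
  let tp = 0, fp = 0, fn = 0, correct = 0;
  for (let i = 0; i < labels.length; i++) {
    if (predictions[i] === labels[i]) correct++;
    if (predictions[i] === 1 && labels[i] === 1) tp++;
    if (predictions[i] === 1 && labels[i] === 0) fp++;
    if (predictions[i] === 0 && labels[i] === 1) fn++;
  }
  const precision = tp / (tp + fp || 1);
  const recall = tp / (tp + fn || 1);
  return {
    accuracy: correct / labels.length,
    f1: (2 * precision * recall) / (precision + recall || 1),
  };
}

evaluate([1, 0, 1, 1], [1, 0, 0, 1]);
// { accuracy: 0.75, f1: 0.8 }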

Remember: A good fine-tuning process requires careful evaluation to ensure the model generalizes well to unseen data.

Applications of Fine-Tuning

Fine-tuning LLMs opens up a world of possibilities:

Healthcare

A fine-tuned LLM can assist doctors by summarizing patient records and providing recommendations based on the latest research.

Finance

In the finance industry, LLMs fine-tuned on stock data can generate accurate reports and forecasts.

Legal Advice

LLMs trained on legal documents can assist in drafting contracts and answering legal questions with precise language.

Key Benefits of Fine-Tuning

1. Increased Accuracy: Models become more reliable for specific tasks and queries.

2. Faster Response: Tailored models provide answers more efficiently, improving user satisfaction.

3. Reduced Hallucination: By focusing on relevant data, fine-tuned models are less likely to produce misleading information.

Fine-tuning is essential for LLMs to reach their full potential in real-world applications.

This site is hosted on Github | Ⓒ 2022 | Designed with ❤ by Bhimesh