Understanding LLM Parameters

Dec 26, 2023.

More and more models with low parameter counts keep beating models with much higher counts. That made me realize I didn't fully understand what parameters actually are and how, in practice, they translate into better performance.

This article is the result of my attempt, as someone working with LLMs daily but with no machine-learning background, to understand one of their key aspects a bit better.

If you already have solid knowledge of deep learning, this article is not for you.

Basic Concepts

Parameters ultimately determine how the model processes and generates language, with each one contributing to the model's decision-making process.

Parameters can be thought of as the 'knowledge bits' that a model uses to understand and generate human language.

Just like neurons in the human brain connect to form a network and process information, in LLMs, parameters (weights and biases) are connected in a neural network. These connections are fine-tuned during the training process.

Some models:

  • Mistral 7B: a 7B-parameter model that outperforms Llama 2 13B on all benchmarks and Llama 1 34B on many of them.
  • GPT-4: reportedly ~1.8 trillion parameters across 120 layers.
  • Falcon models: released in several sizes (7B, 40B, and 180B parameters), each with its own architecture and trade-offs.

Building a 10-parameter model

Let's build a hyper-simple neural network model with exactly 10 parameters using TensorFlow and Keras:

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
import numpy as np

# Define the model: a single dense layer with 2 neurons and 4 input features
model = Sequential([
    Dense(2, input_shape=(4,), activation='relu')
])

# Print the model structure and parameter count
model.summary()

This code creates a simple neural network with one dense layer. The layer has 2 neurons and an input shape of 4. The total number of parameters is calculated as (inputs × neurons) + biases, which here is (4 × 2) + 2 = 10. When you run this code, model.summary() displays the structure of the model, including the total parameter count, confirming that there are indeed 10 parameters in the model.

# Compile the model
model.compile(optimizer='adam', loss='mean_squared_error')

# Generate some random data for demonstration:
# a dataset with 100 samples, each with 4 features
X_train = np.random.random((100, 4))
# and some random target values (for a demonstration, the targets don't have to make sense)
y_train = np.random.random((100, 2))

# Train the model
model.fit(X_train, y_train, epochs=10)

# After training, you can use model.predict() to make predictions on new data
X_new = np.random.random((10, 4))  # New data for prediction
predictions = model.predict(X_new)
print(predictions)  # View the predictions

In this code:

Compile the Model: Before training, the model needs to be compiled. This is done using model.compile(), where you specify the optimizer and loss function. Here, we use the Adam optimizer and mean squared error for loss, which are common choices for many tasks.

Generate Training Data: For demonstration, we generate some random data (X_train) and corresponding target values (y_train). In a real-world scenario, this data would come from your dataset.

Training the Model: Use model.fit() to train the model on your data. You can specify the number of epochs, which is how many times the model will go through the entire training dataset.

Making Predictions: After training, use model.predict() to make predictions on new, unseen data. Remember, this is a very basic example for demonstration purposes. In a practical application, you would use a well-structured dataset, and the model's complexity would depend on the task at hand. Also, the performance of this model is not indicative of real-world usage, as both the model and the data are overly simplified.

Layers

At the core of an LLM is a neural network, structured in layers.

Imagine these layers as a series of filters (or stages) through which information passes and gets refined.

There are three primary types of layers:

  • Input Layer: Where the model receives its input, such as a sentence or a phrase.
  • Hidden Layers: Where the actual processing happens.
  • Output Layer: This produces the final result, like the next word in a sentence.

Each layer consists of numerous nodes (neurons), and each node is connected to nodes in the previous and next layers.

The parameters (weights and biases) associated with these connections define how one node influences another.

As data passes through each layer, it undergoes transformations based on the layer’s parameters. Initially, raw data is processed into more abstract and useful forms, layer by layer.

In this network, parameters are the connectors between layers. They decide how much influence one layer has on the next. For example, in language models, they determine how a given word in a sentence influences the prediction of the next word.

During training, these parameters are adjusted based on the model's performance. If the model incorrectly predicts a word, the error is used to adjust the parameters, improving future predictions.

The number of layers (depth) and the number of nodes in each layer (width) contribute to the model's ability to understand complex patterns. More layers and nodes generally mean more parameters, enhancing the model's learning capacity.
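
To make the relationship between depth, width, and parameter count concrete, here's a small sketch (the layer sizes are arbitrary, chosen just for illustration): adding an 8-neuron hidden layer to our earlier toy model already takes the parameter count from 10 to 58.

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# 4 input features -> hidden layer of 8 neurons -> output layer of 2 neurons
model = Sequential([
    Dense(8, input_shape=(4,), activation='relu'),  # (4 * 8) + 8 = 40 parameters
    Dense(2, activation='softmax'),                  # (8 * 2) + 2 = 18 parameters
])
model.summary()  # Total: 58 parameters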

Advanced Concepts:

  • Backpropagation: This is the process used during training to adjust parameters. It involves calculating the gradient of the error with respect to each parameter and adjusting them to minimize the error.

  • Activation Functions: These are functions applied at each node that decide whether it should be activated or not, based on the transformed input received. Common functions include ReLU (Rectified Linear Unit) and Sigmoid (see the sketch after this list).

  • Regularization Techniques: To prevent overfitting, techniques like dropout are used, where randomly selected neurons are ignored during training. This ensures that the model does not become overly reliant on any single neuron and generalizes better.
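
To make the activation functions concrete, here's a minimal NumPy sketch of ReLU and Sigmoid (framework-free, with arbitrarily chosen input values):

import numpy as np

def relu(x):
    # Keeps positive values, zeroes out negative ones
    return np.maximum(0, x)

def sigmoid(x):
    # Squashes any value into the range (0, 1)
    return 1 / (1 + np.exp(-x))

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))     # [0. 0. 3.]
print(sigmoid(x))  # approximately [0.12 0.5 0.95]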

Weights and Biases

In a neural network, weights and biases (referred to as w and b) are the fundamental elements that determine how input data is transformed. Parameters are essentially the weights and biases.

Weights help in understanding the relationship between different words and phrases.

Biases are additional terms added to the weighted sum, allowing the model to better fit the data. They act as an offset or adjustment, ensuring that even when all inputs are zero, there can still be a non-zero output.

Initially, weights and biases are typically set to small random values. This randomness helps break symmetry and ensures that all neurons initially learn different things.

For example, consider a simple neural network with one input layer, one hidden layer, and one output layer. The weights and biases might be initialized as follows:

import numpy as np

# Example layer sizes (arbitrary values, for illustration only)
input_size, hidden_size, output_size = 4, 8, 2

weights = {
    'input_to_hidden': np.random.rand(input_size, hidden_size) * 0.01,
    'hidden_to_output': np.random.rand(hidden_size, output_size) * 0.01,
}
biases = {
    'hidden': np.random.rand(hidden_size) * 0.01,
    'output': np.random.rand(output_size) * 0.01,
}

In this example, np.random.rand generates random numbers between 0 and 1, and we multiply by 0.01 to keep the initial values small. input_size, hidden_size, and output_size correspond to the number of nodes in the input, hidden, and output layers, respectively.

Model developers may choose different initialization schemes (like Xavier or He initialization) based on the network architecture and activation functions used, to optimize the training process.
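
For illustration, here is roughly what Xavier (Glorot) and He initialization look like in NumPy; the layer sizes are arbitrary placeholders:

import numpy as np

input_size, hidden_size = 4, 8  # arbitrary example sizes

# Xavier/Glorot: variance scaled by fan-in and fan-out, commonly used with sigmoid/tanh
w_xavier = np.random.randn(input_size, hidden_size) * np.sqrt(2.0 / (input_size + hidden_size))

# He: variance scaled by fan-in only, commonly used with ReLU
w_he = np.random.randn(input_size, hidden_size) * np.sqrt(2.0 / input_size)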

Then, during training, the model is exposed to large datasets. It makes predictions based on its current weights and biases, and these predictions are compared to the actual outcomes.

If the prediction is incorrect, we go through a process called backpropagation to adjust the model's weights and biases. This involves calculating the gradient (or rate of change) of a loss function (a measure of prediction error) with respect to each parameter.

The model then uses algorithms like gradient descent to adjust the parameters in a direction that minimizes the error. This gradually refines the model’s understanding and decision-making capabilities.
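
At its simplest, a gradient descent update nudges each parameter a small step against its gradient. Here's a toy NumPy sketch for a single weight matrix (the values are made up; in practice the gradient comes from backpropagation):

import numpy as np

learning_rate = 0.01
weights = np.random.randn(4, 2) * 0.01  # current parameters
grad = np.random.randn(4, 2)            # gradient of the loss with respect to the weights

# Step in the direction that reduces the loss
weights = weights - learning_rate * grad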

Parameters are stored as matrices or tensors (multi-dimensional arrays), with each layer of the network having its own set of weights and biases.

The training process, involving constant updates to these parameters, is computationally intensive, requiring powerful GPUs or specialized hardware.

As the model encounters more data, it continually adapts its parameters, refining its language understanding and generation capabilities.

An issue that can arise is overfitting, where the model becomes too specialized to the training data. We try to avoid this through regularization techniques like dropout, L1/L2 regularization, or early stopping.
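
For example, in Keras, L2 regularization and early stopping can be added to the toy model from earlier roughly like this (a sketch, not a tuned setup):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import EarlyStopping

model = Sequential([
    # Penalize large weights so the model doesn't just memorize the training data
    Dense(2, input_shape=(4,), activation='relu', kernel_regularizer=l2(0.01))
])
model.compile(optimizer='adam', loss='mean_squared_error')

# Stop training once the validation loss stops improving
early_stop = EarlyStopping(monitor='val_loss', patience=3)
model.fit(X_train, y_train, validation_split=0.2, epochs=50, callbacks=[early_stop])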

Training Data

Training data for LLMs often comes from a wide range of internet sources, including books, websites, articles, forums, and pre-existing datasets such as the Wikipedia corpus or Common Crawl.

The collected data often contains irrelevant or unwanted information (like HTML tags, non-textual elements). Cleaning involves removing these elements to retain only the meaningful text.

The data is normalized: this includes standardizing text, like converting to a uniform character set, fixing encoding issues, or standardizing date and number formats.

The cleaned and filtered text is broken down into tokens (words, subwords, or characters) using tokenization algorithms. These tokens are then transformed into numerical representations (embeddings) that can be processed by the neural network.
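
As a toy illustration of that last step (real tokenizers use subword algorithms like BPE, and embeddings are learned rather than random):

import numpy as np

sentence = "the cat sat on the mat"
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}

# Tokenize: split the text and map each token to an integer id
token_ids = [vocab[word] for word in sentence.split()]
print(token_ids)  # [0, 1, 2, 3, 0, 4]

# Embed: look up a vector for each token id (random here, learned in a real model)
embedding_matrix = np.random.rand(len(vocab), 8)  # vocab_size x embedding_dim
embeddings = embedding_matrix[token_ids]          # shape: (6, 8)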

In some cases, data augmentation techniques are used to expand the dataset artificially. This includes techniques like paraphrasing, back-translation, or synthetic data generation.

Challenges in Training High-Parameter Models:

  • Computational Resources and Energy Consumption:
      • Hardware Requirements: Training models with billions of parameters necessitates an array of high-performance GPUs or specialized hardware like TPUs. The cost associated with this hardware is substantial.
      • Energy Demands: The energy consumption for training and operating such models is considerable, contributing to high operational costs and environmental impact.
  • Data Scalability and Management:
      • Extensive Data Needs: Large models require massive and diverse datasets for training. Gathering, storing, and processing such extensive data is challenging and resource-intensive.
      • Quality and Bias in Data: Ensuring high-quality, unbiased data is critical. Poor quality or biased data can lead to a model that reinforces existing prejudices or inaccuracies.
  • Training Time: Large models can take weeks or even months to train, which can be a significant bottleneck in the development and iterative improvement cycle.
  • Hyperparameter Tuning and Overfitting: Determining the optimal settings for various hyperparameters (like learning rate and batch size) is a complex and time-consuming task. And with a high number of parameters, there's a greater risk of the model fitting too closely to the training data (overfitting), which can hamper its performance on new, unseen data.

Understanding Underfitting in Smaller Models:

  • Inadequate Learning Capacity: When a model with too few parameters is exposed to a vast amount of data, it lacks the capacity to learn and capture the complexity in the data. This is analogous to trying to understand a complex subject with a very basic textbook.
  • Manifestations of Underfitting: An underfitted model performs poorly, not just on unseen data but often also on the training data itself, failing to grasp the underlying patterns and nuances.
  • Addressing Underfitting: Solutions include increasing the number of parameters (adding layers/nodes), choosing more sophisticated models, or simplifying the training data to match the model's capacity.

Specific Challenges with New High-Parameter Models:

  • Computational Demands: Training new, large models requires balancing computational efficiency with model complexity. It involves optimizing the model architecture and training algorithms for better performance with the available hardware.
  • Model Tuning: Each model may require unique tuning strategies, for instance adjusting learning rates dynamically during training or experimenting with different types of regularization to avoid overfitting.
  • Cost-Benefit Analysis: A significant consideration is whether the incremental gains in model performance justify the additional computational costs and complexity.
  • Deployment Considerations: The scalability and maintenance of large models are also critical challenges. Ensuring that these models can be efficiently deployed and updated in various environments is a key consideration.

Parameters are Not Everything

While a larger number of parameters can enhance a model's capabilities, they are not the sole determinant of its effectiveness.

As we saw at the beginning, models with fewer parameters keep outperforming their larger counterparts.

Innovations like the Transformer architecture rely on self-attention mechanisms, which let the model focus on the relevant parts of the input data, improving both efficiency and accuracy.
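
A stripped-down sketch of the core operation, scaled dot-product self-attention, might look like this in NumPy (no multi-head projections or masking):

import numpy as np

def self_attention(Q, K, V):
    # Each token scores every other token, then takes a weighted average of their values
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

tokens = np.random.rand(6, 8)  # 6 token embeddings of dimension 8
output = self_attention(tokens, tokens, tokens)  # shape: (6, 8)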

Techniques like Sparse Transformers use a subset of the total possible connections, reducing computational load while maintaining performance.

There have also been many improvements in training techniques, such as:

  • Transfer Learning: Models are first trained on a large dataset (pre-training) and then fine-tuned on a specific task with a smaller dataset. This approach leverages general knowledge and adapts it to specific tasks (see the sketch after this list).
  • Data Augmentation: In NLP, techniques like back-translation or text generation can expand the training dataset, providing more diversity and helping the model generalize better.
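
A rough Keras sketch of the transfer-learning idea (the "pre-trained" model here is just a stand-in; in practice you would load real pre-trained weights):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Stand-in for a pre-trained model: freeze it so its parameters stay fixed
pretrained = Sequential([Dense(8, input_shape=(4,), activation='relu')])
pretrained.trainable = False

# Add a small task-specific head and train only that part
model = Sequential([pretrained, Dense(2, activation='softmax')])
model.compile(optimizer='adam', loss='categorical_crossentropy')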

Regularization and Optimization

  • Dropout: Randomly dropping units from the neural network during training prevents over-reliance on certain paths, promoting generalization.
  • Learning Rate Schedulers: Adjusting the learning rate during training (like using a warm-up phase or a decay schedule) can lead to more stable and effective training. A Keras sketch of both techniques follows below.
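
A minimal Keras sketch combining both (the rates and sizes are arbitrary):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay

model = Sequential([
    Dense(8, input_shape=(4,), activation='relu'),
    Dropout(0.2),  # randomly drop 20% of this layer's outputs during training
    Dense(2),
])

# Let the learning rate decay gradually instead of staying fixed
schedule = ExponentialDecay(initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.9)
model.compile(optimizer=Adam(learning_rate=schedule), loss='mean_squared_error')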

Model Compression and Quantization

  • Model Pruning: Removing redundant or less important parameters without significantly affecting performance can make models more efficient.
  • Quantization: Reducing the precision of the parameters (e.g., from 32-bit floating point to 8-bit integers) can decrease the model size and speed up inference, with minimal loss in accuracy, as sketched below.
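
A toy NumPy sketch of symmetric int8 quantization (real quantization schemes are more involved, with per-channel scales and calibration):

import numpy as np

weights = np.random.randn(4, 2).astype(np.float32)  # toy float32 weights

# Map the float range onto [-127, 127] and store in 8 bits instead of 32
scale = np.abs(weights).max() / 127
quantized = np.round(weights / scale).astype(np.int8)

# At inference time, values are recovered approximately by multiplying back by the scale
dequantized = quantized.astype(np.float32) * scale
print(np.abs(weights - dequantized).max())  # small quantization error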

Emerging research directions:

  • Capsule Networks: Aiming to improve how models understand spatial hierarchies and relationships in data.
  • Energy-Aware Architecture Design: Creating models that are optimized for energy efficiency, especially important for deployment in resource-constrained environments.
  • Neural Architecture Search (NAS): Automated methods for designing network architectures, potentially finding more efficient models that human designers might miss.
  • Federated Learning: Training models across multiple decentralized devices while keeping data localized, enhancing privacy and data diversity.

In summary, while the number of parameters in a model is important, it's not the only factor that determines its effectiveness. Innovations in training techniques, architecture, data management, and model optimization play major roles in enhancing the capabilities of LLMs.

Researchers continue to explore a variety of approaches to improve model quality without solely relying on increasing parameter counts.
