Evaluating conversational Large Language Models (LLMs) is critical for ensuring their utility, reliability, and safety. Over the years, researchers have developed various methodologies to assess these models, each tailored to specific performance dimensions. Here, we examine the most common approaches to conversational LLM evaluation, highlighting their strengths and limitations.
Automated Metrics
Automated metrics offer quick and scalable ways to evaluate LLMs. These methods compare generated responses against ground-truth data or rely on statistical and semantic properties of language.
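For example, a simple reference-based metric compares the overlap between a model’s response and a ground-truth answer. The sketch below computes token-level F1 in plain Python as a minimal illustration; production evaluations typically rely on established metrics such as BLEU, ROUGE, or BERTScore via dedicated libraries.
# A minimal sketch of a reference-based automated metric: unigram-overlap F1
# between a generated response and a ground-truth reference answer.
from collections import Counter

def token_f1(response: str, reference: str) -> float:
    """Unigram-overlap F1 between a generated response and a reference answer."""
    resp_tokens = response.lower().split()
    ref_tokens = reference.lower().split()
    if not resp_tokens or not ref_tokens:
        return 0.0
    overlap = sum((Counter(resp_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(resp_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Paris is the capital of France.",
               "The capital of France is Paris."))
Metrics like this are cheap to run over thousands of responses, which is exactly why automated evaluation scales where human review does not.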
Fine-tuning large language models (LLMs) such as LLaMA and T5 can produce impressive results, but the memory and hardware required for traditional 16-bit fine-tuning can be a major obstacle. A new method called QLoRA (Quantized Low-Rank Adapter) changes that, enabling efficient fine-tuning of large models using much less memory. This article breaks down the core concepts behind QLoRA, how it uses quantization, and how it enables high-performance model customization on a single GPU.
What is QLoRA?
QLoRA is a method that allows fine-tuning of quantized models using Low-Rank Adapters (LoRA), making it possible to achieve high performance with a fraction of the typical memory usage. By freezing the original 4-bit quantized model and backpropagating gradients only through lightweight LoRA adapters, QLoRA reduces the memory needed to fine-tune a large model, like one with 65 billion parameters, from over 780GB to under 48GB. This makes it possible to fine-tune large models on a single GPU.
How Does QLoRA Work?
QLoRA introduces three major innovations that enable efficient tuning of quantized models without sacrificing performance.
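One common way to apply QLoRA in practice is through the Hugging Face transformers, bitsandbytes, and peft libraries. The sketch below is illustrative rather than prescriptive; the model id and hyperparameters are placeholders, not values mandated by QLoRA itself.
# A hedged sketch of a QLoRA-style setup: 4-bit quantized base model, trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize the frozen base model to 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for the forward pass
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attach adapters to attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the adapter weights are trainable
model.print_trainable_parameters()
The base model stays frozen in 4-bit precision; gradients flow only through the small LoRA matrices, which is where the memory savings come from.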
Efficiently Fine-tuning Large Language Models with QLoRA: An Introductory Guide (Mehedi Hasan, 2024-10-31)
Time series analysis is crucial in various fields, from predicting stock market trends to forecasting weather patterns. However, simply building a time series model isn’t enough; we need to ensure that the model is accurate and reliable. This is where validation comes in. Validation is the process of evaluating how well a model performs on unseen data, ensuring it can generalize beyond the data it was trained on. Validation is especially important for time series models because the data is ordered in time, and traditional techniques like random train-test splits may not be suitable given the sequential nature of the data. In this blog post, we’ll explore Walk Forward Validation, a powerful technique for evaluating time series models.
Why Do We Need Validation in Time Series Models?
Imagine you’re building a model to predict tomorrow’s temperature. You can’t just randomly split your data into training and testing sets as you would with regular data. Why? Because time series data has a natural order, and that order matters! Today’s temperature is influenced by yesterday’s temperature, not next week’s.
So we need validation that can help us in the following ways:
Ensure our model works well on unseen data
Avoid overfitting (when a model learns the noise in the training data)
Simulate real-world conditions where we make predictions using only past data.
Why Walk Forward Validation?
To answer this question, we need to explore some of the most common and widely used validation techniques. Understanding these methods will help us grasp the scenarios in which each technique is suitable, and why and when Walk Forward Validation might be the best choice. Below, we have listed these popular validation methods along with relevant details.
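Before diving into the individual methods, here is a minimal sketch of the core walk-forward idea in Python: fit on everything observed up to time t, evaluate on the next point, then let the window walk forward. The naive last-value forecast is only a placeholder model used for illustration.
# A minimal sketch of expanding-window walk-forward validation on a toy series.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 120)) + rng.normal(0, 0.1, 120)  # toy series

initial_train_size = 80
squared_errors = []
for t in range(initial_train_size, len(series) - 1):
    train = series[: t + 1]     # everything observed up to and including time t
    forecast = train[-1]        # placeholder model: predict the last observed value
    actual = series[t + 1]      # the next, still-unseen observation
    squared_errors.append((forecast - actual) ** 2)

print("Walk-forward MSE:", np.mean(squared_errors))
Each iteration trains only on the past and predicts the immediate future, which is exactly the condition the model will face in production.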
Understanding Walk Forward Validation in Time Series Analysis: A Practical Guide (Istiaq Ahmed Fahad, 2024-10-28)
Large Language Models (LLMs) are becoming increasingly powerful, but they also demand more computing power and energy. To tackle these obstacles, researchers have created BitNet and its supporting framework, bitnet.cpp, providing a more efficient way to run these models. In this article, we will explain the purpose of this innovative technology and how it can benefit everyone, particularly those running AI on their personal devices.
What is BitNet?
BitNet is a form of LLM that operates at 1-bit or 1.58-bit precision. This means it stores and processes weights in a highly compressed format rather than as high-precision numbers. Consider it as shorthand writing: conveying the same message with fewer symbols. Lower precision enables faster model performance and reduced energy consumption without compromising output quality.
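To make the idea concrete, here is a toy illustration of ternary (1.58-bit) weight quantization in Python, where each weight is mapped to -1, 0, or +1 with a single per-tensor scale. This is only a conceptual sketch, not the actual BitNet or bitnet.cpp implementation.
# Toy ternary quantization: map each weight to {-1, 0, +1} plus one scale factor.
import numpy as np

def ternary_quantize(weights: np.ndarray):
    """Quantize a weight matrix to {-1, 0, +1} using a per-tensor absmean scale."""
    scale = np.mean(np.abs(weights)) + 1e-8               # single scaling factor
    quantized = np.clip(np.round(weights / scale), -1, 1)
    return quantized.astype(np.int8), scale

W = np.random.normal(0, 0.02, size=(4, 4)).astype(np.float32)  # toy weight matrix
W_q, scale = ternary_quantize(W)
W_approx = W_q * scale                                     # dequantized approximation
print(W_q)
print("mean reconstruction error:", np.abs(W - W_approx).mean())
Because every weight fits in well under two bits, memory use and the arithmetic required per token drop dramatically compared with 16-bit weights.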
What is bitnet.cpp?
bitnet.cpp is the software framework created to run these 1-bit LLMs efficiently on common devices, such as laptops and desktops. The framework enables large models to run on standard CPUs instead of needing costly GPUs. This simplifies the use of AI on local devices, including those not designed for machine learning.
Why Should You Care About 1-Bit AI?
Running LLMs efficiently offers a number of advantages, such as:
Quicker AI replies – Say goodbye to waiting for lengthy calculations.
Energy conservation – Beneficial for mobile devices like laptops and phones, and especially important for extending battery life.
On-device AI – No need for cloud dependence to operate complex models, improving privacy and accessibility.
Making Large Language Models Faster and More Energy Efficient with BitNet and bitnet.cpp (Mehedi Hasan, 2024-10-23)
Reproducibility is a cornerstone of scientific progress, ensuring that research results can be replicated and verified by others, as well as by the original researchers at a later time. By breaking down research practices into distinct modules, researchers can address key challenges that arise in achieving reproducibility. This blog covers essential modules for ensuring reproducibility: package and environment management, reproducible data pipelines, version control for code, reports, and data, reproducible output, and the advantages of plain text formats.
1. Package Version and Environment Management
Managing package versions and environments ensures that your code runs identically across different systems, independent of updates or changes in dependencies. Tools such as Mamba, Pixi, and UV allow researchers to create isolated environments that lock down specific versions of software libraries and tools.
Why Environment Management?
Reproducibility breaks down when code that once worked suddenly fails due to updated dependencies. By using environment managers, you can “freeze” the environment configuration so others can recreate it exactly.
Example: Using Mamba for Environment Management
# Create an environment with specific package versions
mamba create -n research_env python=3.9 numpy=1.21 pandas=1.3 matplotlib=3.4
# Activate the environment
mamba activate research_env
# Export the environment configuration
mamba env export > environment.yml
# Recreate the environment from YAML
mamba env create -f environment.yml
By sharing the environment.yml file, anyone can recreate the same environment, ensuring that code runs identically.
2. Reproducible Data Pipelines
After setting up the environment, the next step is ensuring that data processing pipelines produce consistent results. A reproducible data pipeline ensures that given the same raw data and the same processing steps, the outputs will be the same.
Why Reproducible Data Pipelines?
Data manipulation, cleaning, and transformation steps can introduce variability if not well-structured. Using pipeline automation tools such as Snakemake or Nextflow allows researchers to define and automate the data processing workflow.
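As a minimal illustration of the principle (before reaching for a full workflow tool), the sketch below shows one processing step as a plain Python script, assuming a hypothetical data/raw/measurements.csv with value and timestamp columns: every transformation is explicit and deterministic, so rerunning it on the same raw data always produces the same output. Snakemake or Nextflow then chain such steps into a dependency-tracked workflow.
# A minimal, deterministic processing step; file paths and columns are hypothetical.
import pandas as pd

RAW_PATH = "data/raw/measurements.csv"                # hypothetical input file
CLEAN_PATH = "data/processed/measurements_clean.csv"  # hypothetical output file

def clean(df: pd.DataFrame) -> pd.DataFrame:
    """Deterministic cleaning: the same raw data always yields the same result."""
    df = df.dropna(subset=["value"])          # drop incomplete records
    df["value"] = df["value"].clip(lower=0)   # enforce a lower bound on measurements
    return df.sort_values("timestamp")        # fix row order explicitly

if __name__ == "__main__":
    raw = pd.read_csv(RAW_PATH)
    clean(raw).to_csv(CLEAN_PATH, index=False)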
A Modular Approach to Reproducible Research (Md. Aminul Islam Shazid, 2024-10-15)
Chatbots have rapidly become popular in recent times. These AI-based applications are easy to build yet very effective for improving customer service. Looking at the work AI does, one might think that building such an application must be quite challenging and costly. From an R&D perspective, it is indeed challenging and costly. However, if we are thinking about a smaller scale, or just want some hands-on experience, then Botpress makes it very easy for us. Botpress is a website where we can create chatbots instantly. The process is simple and makes it easy to understand how a chatbot works: we just need to drag and drop components. Let’s get started with Botpress.
1. First, we need to sign in to the website. There are multiple options to sign in with: GitHub, Google, Microsoft, or LinkedIn. Apart from these, we can also create an account with an email address.
2. We will have our dashboard once we sign in. Now we can click on the Create Chatbot button to start developing the chatbot.
3. We will be given a few templates. Initially, we will have only one template available that comes with some existing components and instructions. Later we will be able to choose blank templates.
Build Your First Chatbot Effortlessly with Botpress: A Step-by-Step Guide (Mehedi Hasan, 2024-10-12)
Responsible AI is a framework that emphasizes the development of AI technologies in a way that respects ethical principles, societal norms, and individual rights. Here’s a beginner’s guide for AI researchers looking to integrate responsible AI principles into their work.
Understand the Principles of Responsible AI
The first step is to familiarize yourself with the core principles of responsible AI. These typically include fairness, transparency, accountability, privacy, and security. Understanding these principles will help you to consider the broader implications of your work and ensure that your research contributes positively to society.
Fairness: AI systems should be free from biases and should not discriminate against individuals or groups.
Transparency: The workings of AI systems should be open and understandable to users and stakeholders.
Accountability: AI researchers and developers should be accountable for how their AI systems operate.
Privacy: AI systems must respect and protect individuals’ privacy.
Security: AI systems should be secure against unauthorized access and malicious use.
Engage with Interdisciplinary Research
AI research does not exist in a vacuum; it intersects with numerous fields such as ethics, law, sociology, and psychology. Engaging with interdisciplinary research can provide valuable insights into the social and ethical implications of AI, helping you to design technologies that are not only innovative but also socially responsible. Collaborate with experts from these fields to gain a broader perspective on the impact of your work.
Adopt an Ethical Framework
Developing or adopting an ethical framework for your research can guide your decision-making process and help ensure that your work aligns with responsible AI principles. This could involve conducting ethical reviews of your projects, considering the potential societal impact of your research, and implementing guidelines for ethical AI development.
Prioritize Privacy and Security
Given the increasing amount of personal data being processed by AI systems, prioritizing privacy and security is essential. This means implementing robust data protection measures, ensuring data anonymization where possible, and developing AI systems that are resilient to attacks and unauthorized access.
Foster Transparency and Explainability
Work towards making your AI systems as transparent and explainable as possible. This involves developing techniques that allow others to understand how your AI models make decisions, which can help build trust and facilitate the identification and correction of biases.
Engage with Stakeholders
Engage with a broad range of stakeholders, including those who may be affected by your AI systems, to gather diverse perspectives and understand potential societal impacts. This can help identify unforeseen ethical issues and ensure that your research benefits all sections of society.
Continuous Learning and Adaptation
The field of AI and the societal context in which it operates are constantly evolving. Stay informed about the latest developments in responsible AI, including new ethical guidelines, regulatory changes, and societal expectations. Be prepared to adapt your research practices accordingly.
Conclusion
Integrating responsible AI principles into your research is not just about mitigating risks; it’s about leveraging AI to create a positive impact on society. By prioritizing ethics, engaging with interdisciplinary research, and fostering transparency and stakeholder engagement, you can contribute to the development of AI technologies that are not only advanced but also aligned with the greater good. The journey of becoming a responsible AI researcher is ongoing and requires a commitment to continuous learning and adaptation.
Here are some interesting papers that can help you ponder:
Principles First: Integrating Responsible AI into Your Research (2024-02-22)
MidJourney is an innovative platform that pioneers AI-powered creativity, pushing the boundaries of image generation and artistic expression. The upcoming release of Version 6 (V6) and the new Alpha website have the community eagerly anticipating the future.
During a recent office hours session, MidJourney shared updates on the Alpha website, mobile compatibility, V6, and enhanced features. The Alpha website showcases promising functionality in image creation and is accessible to exclusive 10K Club members, with plans for a gradual rollout to broader audiences.
MidJourney is working on enhancing the mobile experience with the development of a probable web app for Android and a native iOS app, and welcomes collaboration from individuals skilled in Native Android development.
V6 promises to revolutionize the image generation process with natural language inputs for a more intuitive user experience and enhanced features, including an updated describe feature, style anchoring, and a next-gen style tuner. However, V6 may initially be more expensive per image because it lacks some of the optimizations present in its predecessor, V5.
MidJourney’s commitment to pushing the boundaries of AI artistry is evident in its ongoing developments and community engagement. With V6 on the horizon and the Alpha website paving the way for new possibilities, the MidJourney community is on the brink of a new era in AI artistry.
MidJourney V6: Exploring the Boundaries of AI Artistry (2024-02-21)
Bangladesh is striving to close the gap in technology and education, and innovative approaches to learning are more crucial than ever. With recent shifts in the national curriculum, there’s a growing recognition of the need to integrate advanced technologies and better digital content into educational practices. At the forefront of this transformative wave is the Smart Learning Platform project, which has embarked on an ambitious project with the Liberation War Museum (LWM) to redefine the educational landscape. This initiative not only aligns with the country’s educational reforms but pushes the boundaries further by integrating Artificial Intelligence (AI) into learning, setting a new precedent in the fusion of technology and education.
The project’s inception was rooted in the idea of using digital media to bring the Liberation War of Bangladesh closer to the young minds of the country. Targeting Class 6 and Class 7 students under the Bangladesh National Curriculum, the initial objective was to develop engaging video content that would showcase the museum’s wealth of information, artifacts, and historical documents. The aim was to ignite a spark of interest and understanding about this pivotal period in the nation’s history.
When this project was handed over to DnD Lab, we envisioned a more interactive and immersive learning experience. Our proposal to integrate an AI system into the learning platform marked a significant leap from the conventional methods of history education. This innovative approach was designed to transform passive content consumption into an interactive, engaging process.
The integration of AI was multifaceted. One aspect involved interactive question formulation following the QuBAN (Query-Based Access to Neurons) method and multiple-choice questions embedded within the content, allowing students to actively engage and interact as they learned. This feature was not just about testing knowledge but about encouraging students to think critically and seek answers, with AI guiding them subtly rather than providing outright solutions.
Beyond individual learning, DnD Lab’s system encouraged group activities and home-based participation, expanding the learning environment beyond the traditional classroom. An AI chatbot was also introduced, serving as a virtual guide and assistant, helping students navigate through their educational journey.
Teachers were not left behind in this digital revolution. DnD Lab equipped educators with AI-driven tools for monitoring student performance and identifying areas for improvement. The AI system provided actionable insights and recommendations, enabling teachers to optimize their teaching strategies. Moreover, the system’s analytics capabilities allowed for a comprehensive overview of class performance, simplifying the assessment process.
Gamification elements were also a critical component of this project. The introduction of symbolic scoring systems, leaderboards, and rankings injected a sense of competition and achievement into the learning process, motivating students through playful yet educational challenges.
The project gained significant public attention following a press conference organized by LWM, where it was highlighted in major Bangladeshi media outlets like Daily Star, Prothom Alo, and Ekattor News. The initiative’s impact was further amplified through a Facebook live presentation on the Bangladesh Liberation War Museum’s page.
The project was brought to life by a team of dedicated individuals, including LWM Trustee Dr. Sarwar Ali, Trustee Mofidul Haque, and Trustee and Member Secretary Sara Zaker, along with Dr. Moinul Islam Zaber, a Professor from the Computer Science and Engineering Department of Dhaka University, and Senior Lecturer Md. Abu Sayed from Independent University Bangladesh. The DnD Lab team, comprising students from various backgrounds, including Khandoker Ashik Uz Zaman (Researcher and Developer), Ahsan Habib Nahid (Web Developer), Amit Roy (Content Creator), Md. Mehedi Hasan (AI Developer), and Abir Chakraborty Partha (AI Developer), played a pivotal role under the guidance of Dr. Moinul Islam Zaber and Md. Abu Sayed.
The Liberation War Museum’s Smart Learning Platform project by DnD Lab stands as a beacon of innovative educational practices in Bangladesh. It exemplifies how technology, particularly AI, can be harnessed to make learning history a more engaging, interactive, and effective process. This initiative not only honors the past but also paves the way for a future where education is enriched through the power of technology.
Data & Design Lab Collaboration with Liberation War Museum for Smart Education Platform and Digital Content for National Curriculum (2024-01-10)