Funda AI - building a laptop powered by AI to help students in Africa learn

I am building an AI school for kids in Africa: a laptop connected to an ecosystem of educational tools that interact with an offline AI fine-tuned for education. This will help kids improve their exam results and build the logical and critical thinking skills needed for future-proof careers.

The best way to overcome infrastructural barriers is to leapfrog them. I am building this MVP for my nephew in Zimbabwe. In Zimbabwe, like much of Africa, the internet is expensive or slow, and other barriers, such as intermittent electricity, limit overall access. I want to solve that for kids across the continent with a propensity for self-learning.

Laptops for kids

Each subscribed student, aged between 8 and 17, gets a laptop.

The laptop is locked down to prevent them from accessing inappropriate sites. There are limitations on what they can install and from what source.

Starting the laptop

The main app store the laptop is connected to is the Funda AI Hub, an ecosystem of desktop apps, each built for a different purpose. All of these tools are offline-first.

Funda AI Hub EdTech App Store

The Funda AI Hub EdTech App Store contains a library of educational apps and curated books. The apps span a range of purposes: helping children study for exams, teaching foundational skills for careers in tech, building financial management skills, and developing critical thinking.


The first two apps built into the library are The Examiner and the AI Reader.

The Examiner

The Examiner is an app that creates mock exams for different subjects based on the child’s curriculum and gives the child feedback on their attempts. This helps students focus on the most frequently examined topics and improve their exam grades.

AI Reader

The AI Reader is a book-specific AI tutor. Whatever book the student opens loads up an interface giving them access to an AI tutor fed specifically with that book.

Kids can learn about anything, while having their personal AI guiding them through the content, tailored to their pace of learning, without requiring internet access.

The Engineering Side

There are three main components I have been building:

  1. The “interactables”. This includes the app store and the apps I am building into the laptops. These are desktop apps intended to run on machines running Linux Mint.

  2. “Parental” control. You can read more detail about this in the linked blog. In a nutshell, this involved configuring Linux to blacklist a continually updated list of sites and disabling commands that would let the student install unauthorized apps or alter the system.

  3. Building the pipeline/infrastructure. This is everything happening behind the scenes to get things working.
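One common way to implement the site blacklist described above is to sink blocked domains via /etc/hosts. The sketch below is an assumption about the approach (the linked blog has the real details), and the marker comment and domain names are illustrative:

```python
BLOCKLIST_MARKER = "# funda-ai-blocklist"

def hosts_entries(domains):
    """Render /etc/hosts lines that sink each blocked domain to 0.0.0.0.

    The real setup fetches a continually updated blocklist; the marker
    comment lets an updater find and replace its own section without
    touching the rest of the file.
    """
    lines = [BLOCKLIST_MARKER]
    for domain in sorted(set(domains)):
        lines.append(f"0.0.0.0 {domain}")
        lines.append(f"0.0.0.0 www.{domain}")
    return "\n".join(lines)
```

An updater running as root would periodically rewrite its marked section of /etc/hosts with the latest list, so the block stays current without any action from the student.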

The fourth element, fine-tuning the AI on the student’s curriculum and integrating it fully into the existing apps, is the next step.

AI Reader: The Engineer’s Summary

There are many tools that open PDFs and automatically carry out vector embedding, feeding the books into an AI (GPT4All, Jan.ai, etc.). I couldn’t use these because of hardware limitations: I am running this MVP on a 2013 MacBook Air with 8 GB of RAM. The idea is to keep the hardware low-cost enough to be affordable to my target audience, while giving students a laptop with a battery life long enough to bypass issues related to intermittent electricity. I can get a Llama 3.2 1B model running relatively well; however, the hardware proved incapable of handling the pre-existing tools mentioned above.

I built something with far fewer frills: a system that automatically detects when a PDF is opened in Xreader (essentially a Linux script) and launches a Streamlit-built AI chat interface.

Internally, a subprocess interfaces with Linux directly: a command retrieves a list of all running processes, watching specifically for a PDF being opened. When that happens, the AI GUI opens. The AI itself has not been integrated into this yet.
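A minimal sketch of that detection loop, assuming `ps` for the process listing and a placeholder `chat_app.py` for the Streamlit script (the matching logic is split out so it can be tested without a running Xreader):

```python
import subprocess
import time

def find_pdf_process(cmdlines):
    """Return the first command line that looks like Xreader with a PDF open."""
    for line in cmdlines:
        if "xreader" in line and ".pdf" in line.lower():
            return line
    return None

def list_process_cmdlines():
    """List the full command line of every running process via `ps`."""
    out = subprocess.run(["ps", "-eo", "args"], capture_output=True, text=True)
    return out.stdout.splitlines()

def watch(poll_seconds=2.0):
    """Poll the process list; launch the Streamlit chat UI when a PDF opens."""
    while True:
        match = find_pdf_process(list_process_cmdlines())
        if match:
            # chat_app.py stands in for the actual Streamlit interface script
            subprocess.Popen(["streamlit", "run", "chat_app.py"])
            return match
        time.sleep(poll_seconds)
```

Polling `ps` every couple of seconds is cheap enough on low-end hardware, and keeps the watcher decoupled from Xreader itself.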

The Examiner: The Engineer’s Summary

The Examiner takes in a set of prompts and renders mock exam questions generated by the local AI model, fine-tuned on past papers specific to the student’s curriculum. A timer limits the response time for each question based on the question’s perceived complexity. This covers multiple subjects depending on the child’s pre-selected level/grade.
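The per-question timer could look something like the sketch below. The complexity levels and time budgets are illustrative assumptions; the real app would derive them from the curriculum and past-paper data:

```python
from dataclasses import dataclass

# Illustrative time budgets (seconds) per perceived complexity level.
TIME_BUDGETS = {"easy": 60, "medium": 120, "hard": 240}

@dataclass
class Question:
    text: str
    complexity: str  # "easy" | "medium" | "hard"

def time_limit(question, budgets=TIME_BUDGETS):
    """Return the per-question timer, falling back to 'medium' if unknown."""
    return budgets.get(question.complexity, budgets["medium"])
```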

The App Store: The Engineer’s Summary

The App Store consists of three components, micro-services if you are into that phrasing. There is an authenticator that validates whether a request is coming from the admin (me), a subscribed user, or a public user. It uses a unique hardware identifier (specifically, a combination of unique system information that can be acquired without requiring the user to elevate to admin permissions).
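A sketch of such a hardware identifier, assembled from system information readable without root. The exact fields are my assumption, not necessarily the combination the authenticator actually uses:

```python
import hashlib
import platform
import uuid

def hardware_fingerprint():
    """Combine non-privileged system identifiers into one stable hash."""
    parts = [
        platform.node(),              # hostname
        platform.machine(),           # CPU architecture
        format(uuid.getnode(), "x"),  # primary MAC address, no root needed
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()
```

Hashing the combined fields means the backend can recognise a laptop without ever storing the raw identifiers.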

The user accesses the App Store, a desktop app, and is authenticated; based on their request, they have access to a set of commands and actions controlled by the backend.

The Book Library: The Engineer’s Summary

I thought through the “book library system” with these questions:

  1. Who will be uploading books initially? Me.

  2. How do I intend to acquire and upload the books? I buy PDFs and upload them to my Google Drive.

To build this out, I built modules deployed to the cloud so they are always running. One module connects to my Google Drive and checks whether there are any new books. It does this by comparing the books listed in a database (containing data on all books) with the books on Drive. New books are written to the database.
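The comparison step reduces to a set difference. A minimal sketch, assuming both sides are represented by unique book identifiers (e.g. Drive file IDs), which the real module would fetch via the Drive API and a database query:

```python
def find_new_books(drive_books, db_books):
    """Return books present on Drive but not yet recorded in the database.

    Both arguments are iterables of unique book identifiers; the result
    is sorted so repeated runs produce a stable ordering.
    """
    return sorted(set(drive_books) - set(db_books))
```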

Another module regularly runs through the database to find books that have not been embedded. When these books are found, it queues them into a vector-embedding pipeline. I am using Nomic Embed v1.5 for embeddings and Modal for compute. The text is extracted, hierarchical chunking is carried out to preserve the contextual structure of the book, and a two-level embedding is carried out (chapter and chunk level).
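The hierarchical chunking step might be sketched like this, where each chunk keeps a reference to its chapter so both levels can be embedded. The chunk size and overlap are illustrative values, not the production ones:

```python
def hierarchical_chunks(chapters, chunk_size=1000, overlap=100):
    """Split a book into chapter-tagged, overlapping text chunks.

    `chapters` maps chapter titles to their extracted text. Each record
    carries its chapter title, so chapter-level and chunk-level
    embeddings can be generated from the same structure.
    """
    records = []
    step = chunk_size - overlap
    for title, text in chapters.items():
        for start in range(0, len(text), step):
            records.append({
                "chapter": title,
                "chunk": text[start:start + chunk_size],
            })
    return records
```

Keeping the chapter title on every chunk is what lets the AI Reader answer at either granularity: a broad question can match a chapter embedding, a detailed one a chunk embedding.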

A download link is generated and written into the database. When the user interacts with the App Store, they download both the book and the vector embeddings that will be used by their AI Reader. This process still has to be built out (specifically, integrating the AI).

Next Steps

With the recent launch of DeepSeek-R1, I may consider swapping out Llama 3.2 1B for DeepSeek-R1’s distilled 1.5B model. I will need to fine-tune the models, fully integrate them, look at ways to track laptop usage, and then ship out to my first user.

I’m always open to collaboration, especially around distribution, logistics, and fine-tuning AI, or just chatting about tools I can build to help raise the next generation of Africa’s tech leaders.

My email is always open: