
How We’re Building Custom AI Models for Commerce at Gygital

By Godswill Azubike

At Gygital, we’re not just writing software; we’re building intelligent systems that can understand how people buy, how sellers sell, and how the whole commerce process fits together.

For us, AI isn’t some fancy buzzword. It’s a real tool we’re using to help businesses make better decisions and serve customers smarter. We’re already training our own custom AI models that power things like recommendations, demand forecasting, fraud detection, and even price optimization across our commerce products.

Why We’re Building Our Own AI Models

Commerce in Nigeria (and Africa generally) is unique. Buyer behaviour, pricing culture, and even how people shop online are very different from what foreign platforms are built for. That’s why we decided to build our own AI models that understand our kind of market.
When you look at commerce data (products, prices, customer searches, order history, chats, delivery times), you’ll see patterns everywhere. But those patterns only make sense when you process them properly. That’s where our data science team (led by my colleague Kessy Okundia, who’s studying Machine Learning and Data Analysis in the UK) comes in.
We collect all that data, clean it up, and train models that can do things like:
  • Predict what a customer might want to buy next.
  • Forecast which product might go out of stock soon.
  • Detect suspicious transactions.
  • Recommend price adjustments based on demand and location.
The goal is simple: use AI to help sellers sell more, and buyers find what they want faster.

How We Build Our Models

We mostly work with Python, and our data science environment runs through Jupyter Notebook or Google Colab, depending on what we’re experimenting with. Inside those notebooks, we explore the data, visualize trends, and build models from scratch using libraries like pandas, scikit-learn, and XGBoost.
We start from real data: transaction logs, customer sessions, and inventory records. Then we clean, transform, and create new features from them. For example, if a user adds an item to cart and doesn’t check out, that’s a signal. If a product sells fast in one region but not in another, that’s another.
A simple example of what happens in one of our notebooks looks like this:

import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv('transactions.csv')

# feature engineering: days since the customer's previous purchase
# (sort first so shift() picks the right previous row for each customer)
df['timestamp'] = pd.to_datetime(df['timestamp'])
df = df.sort_values(['customer_id', 'timestamp'])
df['days_since_last_purchase'] = (
    df['timestamp'] - df.groupby('customer_id')['timestamp'].shift()
).dt.days.fillna(999)

X = df[['days_since_last_purchase', 'views_in_session', 'price_relative_to_average']]
y = df['converted']

# hold out 20% of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = XGBClassifier()
model.fit(X_train, y_train)

# predicted probability of conversion for each held-out session
preds = model.predict_proba(X_test)[:, 1]

print("AUC:", roc_auc_score(y_test, preds))

We don’t just stop at training. Every model we build passes through proper testing, not only on accuracy, but also on how much business value it actually adds.
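
To make that concrete, here’s a minimal sketch (continuing from the notebook above) of reading the model in business terms rather than just as an AUC number. The 0.5 threshold is a hypothetical decision rule for this example, not a value we actually use:

from sklearn.metrics import precision_score, recall_score

# hypothetical rule: only act (e.g. show a targeted offer) on sessions the model
# scores above 0.5 - the threshold is illustrative, not a production setting
THRESHOLD = 0.5
flagged = preds >= THRESHOLD

precision = precision_score(y_test, flagged)  # of sessions we act on, how many convert
recall = recall_score(y_test, flagged)        # of real conversions, how many we catch

print("Sessions flagged:", int(flagged.sum()))
print("Precision at threshold:", round(precision, 2))
print("Recall at threshold:", round(recall, 2))

Framing results as “of the sessions we act on, this fraction actually converts” makes it much easier to discuss a model with the commerce side of the team.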

From Notebook to Production

Once a model is good enough, we take it out of the notebook and make it usable inside our main systems. That’s where our MLOps process comes in.
We package each model, version it, and deploy it as a small service using Python and FastAPI. That service is what the rest of our system talks to: it exposes an endpoint that accepts data and returns predictions in real time.
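
As a rough illustration, a stripped-down service of that kind might look like the sketch below. The model filename, feature names, and endpoint path are assumptions made for this example rather than our exact production code:

# serve_model.py - minimal sketch of a prediction service (illustrative names)
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# model exported from the notebook, e.g. with joblib.dump(model, 'conversion_model_v1.joblib')
model = joblib.load('conversion_model_v1.joblib')

class SessionFeatures(BaseModel):
    days_since_last_purchase: float
    views_in_session: int
    price_relative_to_average: float

@app.post("/predict")
def predict(features: SessionFeatures):
    X = pd.DataFrame([features.dict()])
    proba = float(model.predict_proba(X)[:, 1][0])
    return {"conversion_probability": proba}

Run with uvicorn (uvicorn serve_model:app), this gives the backend a single endpoint to POST feature values to and get a probability back.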

For example, when a user visits a product page on one of our platforms, our Laravel backend calls the AI model service and gets predictions like “this user has a 78% chance of buying” or “people who viewed this also viewed that.”
That’s how the intelligence quietly runs behind the scenes, fast and accurate.

How Everything Connects

Our entire system works like this:
  • Data Science Side:
    Our data science environment (Jupyter Notebook or Google Colab) is where we explore and train new models. When the model is ready, we export it and move it to our serving environment.
  • Backend (Laravel):
    The Laravel backend powers the APIs. It handles requests from our mobile and web apps, communicates with the AI service, and stores predictions or feedback data for retraining.
  • Frontend (Flutter):
    The Flutter app shows the AI results in action: things like recommendations, dynamic offers, or alerts for store owners about stock levels.
  • MLOps Process:
    Everything from training and versioning to deploying and monitoring the models is automated as much as possible. We track each model’s performance over time and retrain when needed (a minimal sketch of that check follows this list).
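
Here’s a minimal sketch of what that performance check can look like; the file of recent predictions, the baseline AUC, and the tolerated drop are all illustrative assumptions for this example:

import pandas as pd
from sklearn.metrics import roc_auc_score

# recent predictions joined back with the outcomes we later observed
# (the file and its 'predicted_proba' / 'converted' columns are assumed for this sketch)
recent = pd.read_csv('recent_predictions_with_outcomes.csv')

live_auc = roc_auc_score(recent['converted'], recent['predicted_proba'])
BASELINE_AUC = 0.80        # AUC measured at deployment time (illustrative)
MAX_ALLOWED_DROP = 0.05    # degradation we tolerate before retraining (illustrative)

if BASELINE_AUC - live_auc > MAX_ALLOWED_DROP:
    print(f"AUC dropped from {BASELINE_AUC:.2f} to {live_auc:.2f}: trigger retraining")
else:
    print(f"Live AUC {live_auc:.2f} is still within tolerance")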

The whole setup is designed to make sure our AI stays accurate, scalable, and deeply connected to the actual business logic of our commerce products.

What We’ve Learned So Far

This journey has taught us a lot. Here are some of our key takeaways so far:
  • Good data beats complex algorithms. The quality of your data determines the quality of your predictions.
  • Local context matters. Building for Nigerian commerce means considering things like internet reliability, buyer habits, and payment behaviour.
  • AI must serve business goals. We don’t build models just because it’s cool. Every model must solve a real business problem.
  • Monitoring is everything. Once deployed, we constantly check model drift and performance. If things start going off, we retrain.
  • Teamwork is the magic. The collaboration between our data science and engineering teams keeps everything balanced: science meets software 😁.

The Bigger Picture

We’re not trying to build AI for the sake of building AI. We’re trying to make African commerce smarter, one model at a time.
Every day, we’re pushing new experiments in Jupyter, deploying new FastAPI models, and connecting them to our systems. It’s a continuous loop of learning, building, deploying, and improving.
At Gygital, our mission is simple: make commerce more intelligent, more predictive, and more human. And we’re already doing it.

Watch Out for What’s Coming Next

Everything we’ve shared here is just the beginning of what we’re building at Gygital. The models we’re training now are laying the foundation for even bigger things: smarter recommendations, predictive analytics for sellers, and AI-driven commerce agents that can handle tasks on their own.
We’re building a future where African businesses don’t just catch up, they lead. So if you’re following our journey, stay tuned. Something exciting is coming.

Watch out for what we'll launch next.
