Machine Learning · April 2, 2026 · 11 min read · By Olatunde Adedeji

Understanding Machine Learning Workflow Through the Iris Dataset

A practitioner-focused walkthrough of classification with the Iris dataset using scikit-learn, from data inspection to honest model evaluation.

Introduction

Machine learning becomes much easier to understand when you stop treating it like abstract math, and start treating it like a workflow. That is why the Iris dataset remains a useful teaching tool. It is small, clean, and simple enough to help you focus on the real engineering pattern behind a classification problem: understanding the data, preparing it properly, training a model, evaluating it honestly, and interpreting the result with care.

In this walkthrough, I teach the Iris classification task from a practitioner angle. The goal is not just to make the notebook run. The goal is to let you understand what each step is doing, why it matters, and how the same thinking applies when you move on to real business datasets.

The classification problem

The Iris dataset contains measurements of iris flowers. Each row represents one flower, and the model’s job is to predict which species it belongs to based on numeric features.

The features are:

  • sepal length
  • sepal width
  • petal length
  • petal width

The labels are three flower species:

  • setosa
  • versicolor
  • virginica

This is a multi-class classification problem because the model must choose one class from three possible categories.

From a practitioner perspective, this matters because classification is one of the most common machine learning tasks in production systems. The same pattern appears in spam detection, fraud detection, document labeling, support ticket routing, medical triage, customer segmentation, and many other domains.

The Iris dataset is not important because it is realistic. It is important because it helps you learn the mechanics without the noise of missing data, messy text, or broken records.

Starting with the data structure

In the notebook, the first useful check was understanding the type of X and y.

```python
from sklearn.datasets import load_iris

# Assumes the notebook loaded Iris as pandas objects
X, y = load_iris(as_frame=True, return_X_y=True)

print(type(X))
print(type(y))
```

This showed:

  • X is a pandas DataFrame
  • y is a pandas Series

That distinction matters.

A DataFrame is a tabular structure containing multiple columns of input features. A Series is a one-dimensional structure containing the target labels.

In practical machine learning work, getting comfortable with these data structures is essential. Most beginner mistakes come from not knowing whether you are working with a NumPy array, a DataFrame, or a Series. That confusion eventually causes bugs during slicing, transformation, training, or evaluation.

So before doing anything else, always check:

  • what your input features look like
  • what your target looks like
  • whether the dimensions align
  • whether the types are what your code expects

This is basic, but it is also one of the habits that separates smooth experimentation from frustrating debugging.
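That checklist can be written down as a handful of plain assertions. A minimal sketch, assuming the data was loaded with scikit-learn's load_iris(as_frame=True):

```python
import pandas as pd
from sklearn.datasets import load_iris

X, y = load_iris(as_frame=True, return_X_y=True)

# Types: features tabular, target one-dimensional
assert isinstance(X, pd.DataFrame)
assert isinstance(y, pd.Series)

# Dimensions: exactly one label per row of features
assert len(X) == len(y)

# Dtypes: every feature column is numeric before modeling
assert all(pd.api.types.is_numeric_dtype(X[c]) for c in X.columns)
```

Running a cell like this at the top of a notebook costs a few seconds and catches an entire class of slicing and alignment bugs before they happen.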

Inspecting the dataset before modeling

A model should never be the first thing you run. First, inspect the data.

In the exercise, the core checks included:

  • dataset shape
  • feature names
  • target names
  • missing values

A simple inspection tells you the size and structure of the dataset:

```python
print(X.shape)
print(X.columns)
print(y.unique())
print(X.isnull().sum())
```

This revealed:

  • 150 total samples
  • 4 numeric features
  • 3 target classes
  • no missing values

This is a clean dataset, which is why it is ideal for learning.

In real projects, this step often reveals the first red flags:

  • missing values
  • inconsistent types
  • class imbalance
  • duplicated records
  • strange outliers
  • leakage from the target into the features

You need to know that model quality often depends more on data quality than on model complexity. A simple model on clean data often beats a complex model on messy data.
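Several of those red-flag checks are one-liners in pandas. A sketch on the Iris data, again assuming it was loaded with load_iris(as_frame=True):

```python
from sklearn.datasets import load_iris

X, y = load_iris(as_frame=True, return_X_y=True)

# Class balance: Iris is perfectly balanced at 50 samples per class
print(y.value_counts())

# Duplicated records
print("duplicates:", X.duplicated().sum())

# Quick range sanity check for obvious outliers
print(X.describe().loc[["min", "max"]])
```

On a messy business dataset, these same three lines regularly surface the first real problems of a project.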

Looking at the distributions

The next task is to plot histograms.

```python
import matplotlib.pyplot as plt

X.hist(figsize=(10, 8))
plt.show()
```

This is a small step, but it is exactly the kind of step people skip when they rush to train models.

Histograms help you answer questions like:

  • Are feature values spread out or tightly clustered?
  • Do some features have obvious separation between classes?
  • Are there unusual distributions or possible outliers?
  • Do certain features appear more informative than others?

With the Iris dataset, petal measurements often show stronger separation across species than sepal measurements. The histograms themselves are in the notebook linked at the end of this article. Even before training a model, visual inspection gives you intuition about which features may help classification the most.

That is the practitioner mindset: do not let the model be the first thing that tells you about your data.

Splitting the data properly

The notebook then used a train-test split:

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=13
)
```

This is one of the most important habits in machine learning.

You train the model on one portion of the data, and evaluate it on another portion it has not seen before. That gives you a more honest estimate of how well the model generalizes.

Here:

  • 70% of the data was used for training
  • 30% was used for testing
  • random_state=13 makes the split reproducible

The test set in this case contained 45 samples.

In real-world systems, careful splitting becomes even more important. You may need:

  • stratified splits to preserve class balance
  • time-based splits for temporal data
  • group-based splits to avoid leaking related records across train and test
  • separate validation sets for hyperparameter tuning

The simple train-test split in Iris teaches the principle. The production version of the principle becomes stricter and more context-aware.
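As one example, the stratified variant is a single extra argument to train_test_split. A sketch on the same Iris data (the loading call is my assumption):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(as_frame=True, return_X_y=True)

# stratify=y preserves the 50/50/50 class ratio in both halves,
# which matters much more once classes are imbalanced
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=13, stratify=y
)

print(y_test.value_counts())  # 15 samples per class with a 30% split
```

On a balanced dataset like Iris the effect is subtle; on a 95/5 fraud dataset, forgetting it can leave almost no positive examples in your test set.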

Choosing a model: why a Decision Tree?

The exercise used a DecisionTreeClassifier:

```python
from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier(random_state=13)
model.fit(X_train, y_train)
```

A decision tree is a good teaching model because it is intuitive.

It learns by making a series of splits on features. For example:

  • if petal length is below some threshold, go left
  • otherwise go right
  • keep splitting until classes are well separated

The appeal of decision trees is that they are:

  • easy to train
  • easy to visualize conceptually
  • capable of handling non-linear decision boundaries
  • interpretable compared with many more complex models

From a practitioner angle, this is a strong baseline model. You do not always need the most sophisticated algorithm first. In fact, you usually should not start there.

A sensible workflow is:

  1. build a simple baseline
  2. measure it honestly
  3. understand its behavior
  4. improve only if needed

This discipline prevents unnecessary complexity.

Training the model

Training is where the model learns patterns from the labeled data:

```python
model.fit(X_train, y_train)
```

This single line hides a lot of work. During fitting, the decision tree examines the training data and determines splits that best separate the classes.

But you need to know that fit() is not magic. It only works well if the earlier steps were done properly:

  • the data is correctly structured
  • the labels align with the inputs
  • the split is valid
  • the features contain signal
  • there is no leakage or contamination

This is why machine learning should be taught as a system, not as isolated commands.

Making predictions

Once the model is trained, it can make predictions on unseen data:

```python
y_pred = model.predict(X_test)
print(y_pred[:5])
```

This produces predicted class labels for the test set.

At this point, many beginners feel the model is “done.” It is not. Prediction is only one phase. Evaluation is where the real judgment begins.

A working prediction pipeline is not the same thing as a trustworthy model.

Evaluating accuracy the right way

Here, both training accuracy and test accuracy are measured:

```python
train_acc = model.score(X_train, y_train)
test_acc = model.score(X_test, y_test)

print(train_acc)
print(test_acc)
```

The results were:

  • training accuracy: 0.9905
  • test accuracy: 0.9778

These are strong numbers, and on Iris they are not surprising. The dataset is well-structured and relatively easy for many classifiers.

But this is where practitioner thinking matters most.

Why compare training and test accuracy?

Because the gap between them tells a story.

  • If training accuracy is very high and test accuracy is much lower, the model may be overfitting.
  • If both are low, the model may be underfitting or the features may not carry enough predictive signal.
  • If both are high and relatively close, the model may be generalizing reasonably well.

Here, the scores are both high and close to one another. That is a good sign.

Still, a serious practitioner would not stop at accuracy alone in a real project.

Depending on the domain, you may also need:

  • confusion matrix
  • precision, recall, and F1-score
  • ROC-AUC
  • class-wise performance
  • calibration
  • fairness checks
  • error analysis by subgroup

Accuracy is acceptable for a classroom notebook. It is rarely enough for a production decision system.
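For reference, a sketch of what going beyond accuracy looks like with scikit-learn's metrics module, rebuilding the same split and model as above (the loading call is my assumption):

```python
from sklearn.datasets import load_iris
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(as_frame=True, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=13
)

model = DecisionTreeClassifier(random_state=13).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Per-class precision, recall, and F1 in one call
print(classification_report(y_test, y_pred))

# Rows are true classes, columns are predicted classes
print(confusion_matrix(y_test, y_pred))
```

The confusion matrix in particular tells you *which* classes the model confuses, something a single accuracy number can never reveal.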

What this exercise teaches beyond Iris

The attached notebook may look simple, but it teaches a durable machine learning workflow:

1. Know your data structures

Understand whether your features and labels are stored in compatible and expected forms.

2. Inspect before modeling

Check shape, labels, missing values, and feature distributions before training any model.

3. Split honestly

Separate training and test data so you can measure generalization instead of memorization.

4. Start with a baseline

Use a model that is fast, understandable, and sufficient for a first pass.

5. Evaluate with discipline

Compare training and test performance and interpret the gap, not just the absolute number.

6. Build intuition, not just code

Use plots and basic summaries to understand why the model behaves the way it does.

These habits transfer directly to real-world projects, even when the datasets become larger, noisier, and more complex.

Common beginner mistakes this exercise helps prevent

One reason I like this kind of exercise is that it exposes the foundation clearly enough to prevent common mistakes later.

Training before understanding the data

Many people go straight to fit() without even checking whether the dataset is clean or correctly labeled.

Ignoring reproducibility

If you do not set random_state, it becomes harder to compare runs and debug behavior.

Confusing training success with real performance

A model that performs well on training data is not necessarily useful. The test set is what gives you a reality check.

Treating high accuracy as the whole story

Accuracy can hide class-specific problems or imbalance. It is a starting point, not the final word.

Using complex models too early

You learn more by starting simple and understanding the result than by importing a powerful model you cannot explain.

These are not just student mistakes. They show up in production teams too.

How I would extend this exercise in practice

If I were turning this from a starter notebook into a stronger practitioner exercise, I would extend it in a few ways.

Add a confusion matrix

This would show which species are most often confused and give more detail than accuracy alone.

Compare multiple models

It would be useful to compare:

  • decision tree
  • logistic regression
  • k-nearest neighbors
  • random forest

That helps learners see that machine learning is often about trade-offs, not one universally best model.
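A sketch of such a comparison, using scikit-learn defaults for each model (these hyperparameters are defaults, not tuned choices):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(as_frame=True, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=13
)

models = {
    "decision tree": DecisionTreeClassifier(random_state=13),
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbors": KNeighborsClassifier(),
    "random forest": RandomForestClassifier(random_state=13),
}

# Fit each model on the same split and compare test accuracy
for name, clf in models.items():
    clf.fit(X_train, y_train)
    print(f"{name}: {clf.score(X_test, y_test):.3f}")
```

On Iris the scores will cluster tightly together; the lesson is the comparison habit itself, not the winner.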

Use cross-validation

A single train-test split is easy, but cross-validation gives a more stable estimate of performance on small datasets.
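A sketch with cross_val_score, which for a classifier defaults to stratified folds:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(as_frame=True, return_X_y=True)

# 5-fold cross-validation: one accuracy score per fold
scores = cross_val_score(DecisionTreeClassifier(random_state=13), X, y, cv=5)

print(scores)
print(f"mean {scores.mean():.3f} +/- {scores.std():.3f}")
```

The spread across folds is the useful part: it tells you how sensitive your estimate is to which rows landed in the test set.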

Tune hyperparameters

For a decision tree, parameters like max_depth, min_samples_split, and min_samples_leaf affect complexity and generalization.
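A sketch of a small grid search over those parameters with GridSearchCV (the grid values here are illustrative, not recommendations):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(as_frame=True, return_X_y=True)

param_grid = {
    "max_depth": [2, 3, 4, None],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [1, 2, 4],
}

# Every combination is scored with 5-fold cross-validation
search = GridSearchCV(DecisionTreeClassifier(random_state=13), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)
print(f"best CV accuracy: {search.best_score_:.3f}")
```

Note how the search uses cross-validation internally: tuning against a single test set would quietly turn that test set into training data.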

Visualize the tree

Decision trees are especially good for teaching because the split logic can be visualized.
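scikit-learn ships a plot_tree helper for exactly this. A sketch that fits on the full dataset purely for visualization:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

iris = load_iris(as_frame=True)
X, y = iris.data, iris.target

model = DecisionTreeClassifier(random_state=13).fit(X, y)

# Render the learned split logic as a labeled diagram
plt.figure(figsize=(12, 8))
plot_tree(
    model,
    feature_names=iris.feature_names,
    class_names=iris.target_names,
    filled=True,
)
plt.show()
```

Seeing the actual thresholds (for Iris, the first split is typically on a petal measurement) turns the abstract idea of "splits" into something concrete.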

Add error analysis

Even on Iris, reviewing misclassified examples helps build better instinct.
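A sketch of pulling out the misclassified test rows with pandas, reusing the same split and model as before:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(as_frame=True, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=13
)

model = DecisionTreeClassifier(random_state=13).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Keep only the rows the model got wrong, with true vs predicted labels
mask = y_test != y_pred
errors = X_test[mask].assign(true=y_test[mask], predicted=y_pred[mask])
print(errors)
```

Even with only a handful of errors, looking at the actual feature values of the misclassified flowers builds more instinct than the accuracy number alone.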

That is how practitioners grow: not by stopping at “it worked,” but by asking “what does this result actually mean?”

Why this matters for larger real-world datasets

The Iris dataset is tiny and clean. Your business data probably will not be.

Real classification problems often involve:

  • missing values
  • text or image data
  • imbalanced classes
  • noisy labels
  • thousands of features
  • privacy constraints
  • drift over time
  • limited explainability
  • pressure to deploy too early

But the workflow still begins in the same place:

  • inspect the data
  • define the target clearly
  • split responsibly
  • train a baseline
  • evaluate honestly
  • improve with evidence

That is why exercises like this still matter. They teach the underlying principle of machine learning work.

Final Thoughts

This Iris classification exercise is not valuable because it is difficult. It is valuable because it teaches the right habits in a controlled setting.

You learn how to:

  • inspect your inputs and labels
  • understand the dataset before modeling
  • visualize feature distributions
  • split data for honest evaluation
  • train a simple classifier
  • measure both training and test accuracy
  • reason about the result instead of just printing it

That is the real lesson.

A practitioner does not just write code that runs. A practitioner builds workflows that can be trusted, explained, and improved.

If you can learn that on a small dataset like Iris, you are building the right foundation for more serious machine learning work.


The full notebook for this walkthrough is available on GitHub: understanding_ml_workflow_iris_dataset.ipynb

Machine Learning · Classification · scikit-learn · Python · Data Science