Generate Test Data for MySQL With Python

For testing purposes, especially if you're working on a project that uses any database technology to store information, you may need data to try out your project. In that case, you have two options:

  • Find a good dataset (e.g., on Kaggle), or
  • Use a library like Faker.

In this blog post, you will learn how to generate test data for MySQL using Faker.

Requirements

Dependencies

Make sure all the dependencies are installed before creating the Python script that will generate the data for your project.

You can create a requirements.txt file with the following content:
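
A minimal sketch, listing the libraries used in this post plus a MySQL driver (pin versions if you need a reproducible setup):

    pandas
    tqdm
    Faker
    SQLAlchemy
    PyMySQL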

Once you have created this file, run the following command:
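
    pip install -r requirements.txt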

Or if you’re using Anaconda, create an environment.yml file:
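
A sketch of such a file, assuming the same dependencies; the environment name is arbitrary, and the Python version can be any of those listed below:

    name: data-generator
    channels:
      - conda-forge
    dependencies:
      - python=3.11
      - pandas
      - tqdm
      - faker
      - sqlalchemy
      - pymysql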

You can change the Python version; this script has been verified to work with Python 3.7, 3.8, 3.9, 3.10, and 3.11.

Run the following commands to configure the project environment:
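
    conda env create -f environment.yml
    conda activate data-generator

The name passed to conda activate comes from the name field in environment.yml, so adjust it if you changed that field.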

Database

Now that you have the dependencies installed, you must create a database named company.

Log into MySQL:
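
    mysql -u root -p -h localhost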

Or log into MySQL using MySQL Shell:
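
    mysqlsh root@localhost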

Replace root with your username and localhost with the IP address or hostname of your MySQL server instance, if needed.

If using MySQL Shell, change to SQL mode:
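
    \sql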

and create the company database:
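
    CREATE DATABASE company;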

Fake data with Faker

Faker is a Python library that can be used to generate fake data through properties defined in the package.
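
For example, a short snippet that prints ten fake names (a minimal sketch of how the generator is used):

    from faker import Faker

    fake = Faker()

    for _ in range(10):
        print(fake.name())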

The above code will print ten names; each call to the name() method produces a new random value. name() is a property of the generator. Every property of this library is called a "fake", and there are many of them packaged in providers.

Some providers and properties available in the Faker library include:
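
  • person: name(), first_name(), last_name()
  • address: address(), city(), country()
  • company: company(), company_suffix()
  • job: job()
  • internet: email(), url(), user_name()
  • phone_number: phone_number()
  • date_time: date(), date_time(), date_of_birth()

This is only a small sample; each provider exposes many more properties.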

You can find more information on bundled and community providers in the documentation.

Creating a Pandas DataFrame

Now that you know Faker and its properties, create a modules directory and, inside it, a module named dataframe.py. This module will be imported later into the main script, and it is where we define the function that will generate the data.

Multiprocessing is implemented to optimize the execution time of the script, but this will be explained later. First, you need to import the required libraries:

  • pandas. Data generated with Faker will be stored in a Pandas DataFrame before being imported into the database.
  • tqdm(). Required for displaying a progress bar during the DataFrame creation.
  • Faker(). It’s the generator from the faker library.
  • cpu_count(). This is a method from the multiprocessing module that will return the number of cores available.

Faker() creates and initializes a faker generator, which can generate data by accessing the properties.

num_cores is a variable that stores the value returned after calling the cpu_count() method.
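
Putting this together, the top of modules/dataframe.py might look like this (a minimal sketch):

    import pandas as pd
    from faker import Faker
    from multiprocessing import cpu_count
    from tqdm import tqdm

    # Faker generator used to produce the fake values
    fake = Faker()

    # Number of CPU cores available on this machine
    num_cores = cpu_count()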

Then we define the create_dataframe() function, where:

  • x is the variable that will determine the number of iterations of the for loop where the DataFrame is created.
  • data is an empty DataFrame that will later be filled with data generated with Faker.
  • Pandas DataFrame.loc attribute provides access to a group of rows and columns by their label(s). In each iteration, a row of data is added to the DataFrame, and this attribute allows assigning values to each column.

The DataFrame that is created after calling this function will have the following columns:
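
In this sketch those columns are first_name, last_name, job, company, email, and phone; treat them as illustrative choices rather than the exact set used in the original repository. The function itself, following the description above, might look like this:

    def create_dataframe(x):
        # Empty DataFrame that will be filled with fake data
        data = pd.DataFrame()

        # tqdm wraps the range to display a progress bar while the rows are created
        for i in tqdm(range(x)):
            data.loc[i, 'first_name'] = fake.first_name()
            data.loc[i, 'last_name'] = fake.last_name()
            data.loc[i, 'job'] = fake.job()
            data.loc[i, 'company'] = fake.company()
            data.loc[i, 'email'] = fake.email()
            data.loc[i, 'phone'] = fake.phone_number()

        return data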

Connection to the database

Before inserting the data previously generated with Faker, we need to establish a connection to the database; for this, the SQLAlchemy library will be used.

SQLAlchemy is the Python SQL toolkit and Object Relational Mapper that gives application developers the full power and flexibility of SQL.

From SQLAlchemy, we import the create_engine() and sessionmaker() methods. The first is for connecting to the database, and the second is for creating a session bound to the engine object.
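
A sketch of modules/base.py under those assumptions; the connection URL uses the PyMySQL driver and placeholder credentials:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import sessionmaker

    # Connection string: user, password, and localhost are placeholders
    engine = create_engine('mysql+pymysql://user:password@localhost/company')

    # Session factory bound to the engine object
    Session = sessionmaker(bind=engine)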

Don't forget to replace user, password, and localhost with your own authentication details. Save this code in the modules directory and name it base.py.

According to the documentation, SQLAlchemy uses the mysqlclient library by default, but other drivers are available, including PyMySQL.

According to the maintainer of both mysqlclient and PyMySQL, mysqlclient-python is much faster than PyMySQL, but you should use PyMySQL if:

  • You can’t use libmysqlclient for some reason
  • You want to use monkeypatched socket of gevent or eventlet
  • You want to hack mysql protocol

Database schema definition

The schema of the database can be created through the Schema Definition Language provided by SQLAlchemy, but as we're only creating one table and importing the DataFrame by calling the Pandas to_sql() method, this is not necessary.

When calling the to_sql() method, we specify the schema as follows:
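
For example, a dictionary mapping each column to a SQLAlchemy type (the column names match the ones assumed for dataframe.py, so adjust them to your own):

    from sqlalchemy.types import String

    # Column-to-type mapping handed to to_sql() through its dtype parameter
    schema = {
        'first_name': String(50),
        'last_name': String(50),
        'job': String(100),
        'company': String(100),
        'email': String(100),
        'phone': String(50),
    }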

Then we pass the schema variable as a parameter to this method.

Save this code in the modules directory with the name schema.py.

What is multiprocessing?

Multiprocessing is a Python module that can be used to take advantage of the CPU cores available on the computer where the script is running. In Python, single-CPU use is caused by the global interpreter lock (GIL), which allows only one thread to hold control of the Python interpreter at any given time; for more information, see this blog post.

Imagine that you're generating 60,000 records. Running the script on a single core will take longer than you might expect, since each record is generated one by one within the loop. By implementing multiprocessing, the work is divided among the cores: if your CPU has 16 cores, only 15 of them are used (one is left free to avoid freezing the computer), so each core generates 4,000 records.

To better understand how to implement multiprocessing in Python, I recommend the following tutorials:

Generating your data

All the required modules are now ready to be imported into the main script, so it's time to create the sql.py script. First, import the required libraries:

From multiprocessing, Pool() and cpu_count() are required. The Pool class allows you to create and manage a pool of worker processes.

Then, import the modules previously created:
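
A sketch of both sets of imports, assuming the module and variable names used earlier (engine from base.py, create_dataframe() from dataframe.py, and schema from schema.py):

    from multiprocessing import Pool, cpu_count

    import pandas as pd
    from sqlalchemy import text

    from modules.base import engine
    from modules.dataframe import create_dataframe
    from modules.schema import schema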

Now we create the multiprocessing pool, configured to use all available CPU cores minus one. Each core will call the create_dataframe() function and create a DataFrame with 4,000 records, and once every call has finished, all the resulting DataFrames will be concatenated into a single one.
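
A sketch of that step, using the 60,000-record total from the multiprocessing section:

    # Use all available CPU cores minus one
    num_processes = cpu_count() - 1

    # Split the 60,000 records evenly across the workers
    rows_per_process = 60000 // num_processes

    with Pool(num_processes) as pool:
        # Each worker calls create_dataframe() with its share of rows
        frames = pool.map(create_dataframe, [rows_per_process] * num_processes)

    # Concatenate the partial DataFrames into a single one
    df = pd.concat(frames, ignore_index=True)

On platforms that start worker processes with spawn (Windows, recent macOS), wrap this code in an if __name__ == "__main__": guard.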

And finally, we will insert the DataFrame into MySQL by calling the to_sql() method. All the data will be stored in a table named employees.

By calling conn.execute(), a new column named id will be added to the table, set as the primary key, and placed at the beginning.
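
A sketch of these two steps, reusing the engine, schema, and df names from above; the schema mapping is passed through to_sql()'s dtype parameter, and if_exists and chunksize are illustrative choices:

    # Write the DataFrame to the employees table in the company database
    df.to_sql(
        'employees',
        con=engine,
        index=False,
        if_exists='replace',
        chunksize=1000,
        dtype=schema,
    )

    # Add an id column as the primary key, placed at the beginning of the table
    with engine.begin() as conn:
        conn.execute(text(
            'ALTER TABLE employees '
            'ADD COLUMN id INT AUTO_INCREMENT PRIMARY KEY FIRST;'
        ))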

Run the following command to populate the table:
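
    python sql.py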

DataFrame creation with multiprocessing

Generating the DataFrame with the 60,000 records takes just a few seconds; this speed-up is why multiprocessing was implemented.

CPU utilization on Percona Monitoring and Management

Once the script finishes, you can check the data in the database.
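
For example, from the MySQL prompt:

    USE company;
    SELECT COUNT(*) FROM employees;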

The count() function returns the number of records in the employees table.

The code shown in this blog post can be found on my GitHub account in the data-generator repository.

1 Comment

Great example, and some nice tricks to use. In this case, how to configure the execution pool to process in parallel to emulate concurrent operations and test database isolation levels?