Generating big data sets for search engines


When offering our search expertise, we are often asked to run performance evaluations on large data sets, for instance during Proofs of Concept. For a recent customer request, in order to save time and to avoid using sensitive customer data, we used log-synth, a random data generator developed by Ted Dunning. We describe here how to use log-synth to generate a 100,000-line data set.

The first step, which we don’t document here, consists of downloading log-synth, unzipping it and building it with Maven.

The second step is creating a schema that describes how log-synth must generate each line. In our case, the goal is to generate log lines in the following format:

{"uuid":"41775b31-5435-4579-9803-99d78eb0512d","server":"FL-01","date":"2015-07-14","nb_files":53,"status":"RUNNING"}
Of course, for each of these attributes, the values are picked randomly from a predefined set.
We thus create the schema-francelabs.json file:

[
	{"name": "uuid","class": "uuid"},
	{"name":"server", "class":"string", "dist":{"FL-01":1, "FL-02":1, "FL-03":1, "FL-04":1, "FL-05":1, "FL-06":1, "FL-07":1}},
	{"name": "date", "class": "date", "format": "yyyy-MM-dd", "start":"2015-01-01", "end":"2015-12-31"},
	{"name": "nb_files","class": "int","min": 1,"max": 100},
	{"name": "status", "class":"string", "dist":{"RUNNING":1, "OK":1, "ERROR":0.05}}
]

For each attribute, we define its name with the “name” tag and its type with the “class” tag. All the types supported by log-synth, as well as how to use them, are listed and detailed in the log-synth documentation on GitHub.

Our schema consists of 5 attributes:

  • “uuid”: holds a UUID generated by log-synth
  • “server”: holds a value randomly picked from the set [“FL-01”, “FL-02”, “FL-03”, “FL-04”, “FL-05”, “FL-06”, “FL-07”]. Each value has the same weight, so they all have the same probability of being selected for each newly generated line
  • “date”: holds a date formatted as “yyyy-MM-dd”, randomly picked between 2015-01-01 and 2015-12-31
  • “nb_files”: holds an integer randomly picked between 1 and 100
  • “status”: holds a value randomly picked from the set [“RUNNING”, “OK”, “ERROR”]; the value “ERROR” has a much lower probability of being selected because its weight is much smaller than the other two (0.05 against 1)
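The weighted “dist” mechanism is easy to reason about: a value’s probability is its weight divided by the sum of all weights. For “status”, ERROR is therefore drawn with probability 0.05 / 2.05, i.e. about 2.4%. Here is a minimal shell sketch of that kind of weighted sampling (an illustration of the principle, not log-synth’s actual code):

```shell
# Weighted sampling as in the "dist" field of the "status" attribute.
# Weights: RUNNING=1, OK=1, ERROR=0.05 -> total 2.05.
awk 'BEGIN {
  srand(42)
  total = 1 + 1 + 0.05
  for (i = 0; i < 10000; i++) {
    r = rand() * total          # uniform draw over the total weight
    if (r < 1)        running++ # first weight slot: RUNNING
    else if (r < 2)   ok++      # second weight slot: OK
    else              error++   # remaining 0.05 slot: ERROR
  }
  # ERROR should land near 0.05 / 2.05 of the draws, i.e. around 244 of 10000
  printf "RUNNING=%d OK=%d ERROR=%d\n", running, ok, error
}'
```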

The last step is executing log-synth, specifying the data schema and the number of lines it must generate:

log-synth -count 100000 -schema schema-francelabs.json -format JSON -output output/

The “count” parameter sets the number of lines to be generated, “format” sets the output format (JSON in our case) and “output” specifies the output folder (output/ in our case).

And voilà, here is the generated result. For reference, on a machine with a CPU with 4 physical cores and 16 GB of RAM, it took approximately 1.5 seconds:

{"uuid":"cfaa6bdc-825e-41ac-82c8-0cb162c0e3f1","server":"FL-07","date":"2015-12-20","nb_files":67,"status":"ERROR"}
{"uuid":"8c8ef3d4-bc81-4ef7-ba91-15661d881c55","server":"FL-04","date":"2015-05-06","nb_files":18,"status":"RUNNING"}
{"uuid":"8d78cc4b-72a5-4ac2-ab3f-7e81f4dbc4e7","server":"FL-04","date":"2015-11-09","nb_files":64,"status":"RUNNING"}
{"uuid":"dc38bec9-0ffa-41b3-ae0a-5bbb65633358","server":"FL-04","date":"2015-04-23","nb_files":86,"status":"OK"}
{"uuid":"95ac609e-ec8a-4ed0-ac63-2fd6cc4ccaaf","server":"FL-06","date":"2015-03-23","nb_files":35,"status":"RUNNING"}
{"uuid":"3bf2d44b-1044-42cb-9e30-eddfe46419bd","server":"FL-07","date":"2015-05-23","nb_files":34,"status":"OK"}
{"uuid":"53838295-ba7c-4f2a-a14a-2397d41fbcde","server":"FL-06","date":"2015-01-09","nb_files":50,"status":"OK"}
{"uuid":"2ccef5fe-ca99-4d97-9e23-6b5c5ebb30d0","server":"FL-03","date":"2015-02-01","nb_files":77,"status":"OK"}
{"uuid":"c1516d8d-cee7-432f-9809-11edf27d15c0","server":"FL-01","date":"2015-06-05","nb_files":61,"status":"OK"}
{"uuid":"103cd433-deee-426a-83ca-38e7368628e8","server":"FL-03","date":"2015-01-22","nb_files":80,"status":"RUNNING"}
{"uuid":"2c57202e-b4da-4e20-a625-38b42ce4c84f","server":"FL-03","date":"2015-02-06","nb_files":32,"status":"RUNNING"}
{"uuid":"6f40c234-1645-4cdb-8080-ad7498fdf784","server":"FL-01","date":"2015-01-09","nb_files":33,"status":"RUNNING"}
{"uuid":"e6424e56-ddff-45ca-8062-001ac76ae574","server":"FL-04","date":"2015-11-10","nb_files":93,"status":"OK"}
{"uuid":"1f09f8cf-b785-4814-98bf-71847259b2a6","server":"FL-01","date":"2015-12-03","nb_files":68,"status":"OK"}
{"uuid":"eea96f45-79b8-4c5f-b114-3f9bcab3fc81","server":"FL-03","date":"2015-12-06","nb_files":47,"status":"OK"}
{"uuid":"86671321-b640-4336-95d5-7ca28a954d6f","server":"FL-06","date":"2015-04-27","nb_files":84,"status":"OK"}
{"uuid":"e8ee3409-7083-411a-be1e-2f22f2c852ee","server":"FL-01","date":"2015-12-25","nb_files":69,"status":"RUNNING"}
{"uuid":"7b17b1a5-fe04-4a09-936b-5d43e2da71fb","server":"FL-04","date":"2015-02-26","nb_files":19,"status":"RUNNING"}
{"uuid":"b46df22d-2efa-4452-9d9c-507a53ea4f54","server":"FL-02","date":"2015-12-11","nb_files":28,"status":"OK"}
{"uuid":"3d866f7d-bcfa-43f8-824e-b6c38fb4f47f","server":"FL-03","date":"2015-11-03","nb_files":33,"status":"RUNNING"}
...

The generated file can then easily be inserted into an Elasticsearch or Solr index with a simple curl command. One can also use Logstash with Elasticsearch for a continuous (streaming) insertion or a more structured one.
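For instance, an Elasticsearch bulk insertion can be sketched as follows. The index name “logs”, the type name “log” and the output file pattern are assumptions to adapt to your setup; the Elasticsearch _bulk API expects an action line before each document, which we interleave with awk:

```shell
# Convert the generated JSON lines into Elasticsearch _bulk format:
# one action line ({"index":{}}) before each document line.
cat output/*.json \
  | awk '{print "{\"index\":{}}"; print}' \
  > bulk.json

# Post the bulk file (assumes Elasticsearch listening on localhost:9200
# and a target index/type named logs/log -- both hypothetical names).
curl -s -XPOST "localhost:9200/logs/log/_bulk" --data-binary @bulk.json
```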
