Sources are plugins that collect data from different origins.
The following source plugins are included as part of Flowbber.
This source parses standard Cobertura XML files.
A Cobertura file describes the coverage of source code executed by tests. These files can be generated for several programming languages, including:

- C / C++: by compiling with coverage flags and running the tests to produce .gcno and .gcda coverage files, then processing the source tree with lcov to generate a coverage.info tracefile, which is finally converted to Cobertura using lcov_cobertura.
- Go: by running the tests with a -coverprofile file and then processing the resulting coverage.json with gocov-xml.

Data collected:
{
"files": {
"my_source_code.c": {
"total_statements": 40,
"total_misses": 20,
"line_rate": 0.5
},
"another_source.c": {
"total_statements": 40,
"total_misses": 40,
"line_rate": 0.0
}
},
"total": {
"total_statements": 80,
"total_misses": 20,
"line_rate": 0.75
}
}
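The aggregate "total" entry follows directly from the per-file entries; here is a minimal sketch (not Flowbber's actual implementation) showing the relationship, where line_rate is the fraction of statements that were hit:

```python
# Sketch: derive the aggregate "total" entry from the per-file data.
# File names and numbers are illustrative, taken from the sample above.

def aggregate(files):
    """Compute total statements, misses and line rate across files."""
    total_statements = sum(f["total_statements"] for f in files.values())
    total_misses = sum(f["total_misses"] for f in files.values())
    line_rate = (total_statements - total_misses) / total_statements
    return {
        "total_statements": total_statements,
        "total_misses": total_misses,
        "line_rate": line_rate,
    }

files = {
    "my_source_code.c": {"total_statements": 40, "total_misses": 20, "line_rate": 0.5},
    "another_source.c": {"total_statements": 40, "total_misses": 40, "line_rate": 0.0},
}
print(aggregate(files))
```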
Dependencies:
pip3 install flowbber[cobertura]
Usage:
[[sources]]
type = "cobertura"
id = "..."
[sources.config]
xmlpath = "coverage.xml"
include = ["*"]
exclude = []
{
"sources": [
{
"type": "cobertura",
"id": "...",
"config": {
"xmlpath": "coverage.xml",
"include": ["*"],
"exclude": []
}
}
]
}
Path to the Cobertura coverage.xml
file to be parsed.
Default: N/A
Optional: False
Schema:
{
'type': 'string',
'empty': False,
}
Secret: False
List of patterns of files to include.
Matching is performed using Python’s fnmatch.
Default: ['*']
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
},
}
Secret: False
List of paths to files containing patterns of files to include.
Matching is performed using Python’s fnmatch.
All unique patterns parsed from these files will be added to the ones defined
in the include
configuration option.
Default: []
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
'empty': False,
},
}
Secret: False
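The merging behavior described above can be sketched as follows (a minimal illustration, not Flowbber's actual code): each unique pattern read from the listed files is appended to the patterns already given in the include option.

```python
# Sketch: merge patterns read from pattern files into the include list,
# keeping each unique pattern once.
import os
import tempfile
from pathlib import Path

def merge_patterns(include, include_files):
    patterns = list(include)
    for path in include_files:
        for line in Path(path).read_text().splitlines():
            line = line.strip()
            if line and line not in patterns:
                patterns.append(line)
    return patterns

# Example with a temporary pattern file containing a duplicate:
with tempfile.NamedTemporaryFile("w", suffix=".patterns", delete=False) as fd:
    fd.write("*.c\n*.h\n*.c\n")
merged = merge_patterns(["*"], [fd.name])
os.unlink(fd.name)
print(merged)  # ['*', '*.c', '*.h']
```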
List of patterns of files to exclude.
Matching is performed using Python’s fnmatch.
Default: []
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
},
}
Secret: False
List of paths to files containing patterns of files to exclude.
Matching is performed using Python’s fnmatch.
All unique patterns parsed from these files will be added to the ones defined
in the exclude
configuration option.
Default: []
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
'empty': False,
},
}
Secret: False
This source lets the user provide static data describing the current pipeline.
Data collected:
Anything the user entered in the configuration.
Dependencies:
pip3 install flowbber[config]
Usage:
[[sources]]
type = "config"
id = "..."
[sources.config.data]
anydata = "...."
{
"sources": [
{
"type": "config",
"id": "...",
"config": {
"data": {
"anydata": "...."
}
}
}
]
}
This source collects information about the system’s CPU load.
Data collected:
{
"num_cpus": 2,
"system_load": 25.0,
"per_cpu": [40.0, 10.0]
}
Dependencies:
pip3 install flowbber[cpu]
Usage:
[[sources]]
type = "cpu"
id = "..."
{
"sources": [
{
"type": "cpu",
"id": "...",
"config": {}
}
]
}
This source collects environment variables.
Danger
This source can leak sensitive data. Please adjust include
and
exclude
patterns accordingly.
By default include
and exclude
patterns are empty, so no data will
be collected. See Usage below for examples.
Data collected:
{
"pythonhashseed": 720667772
}
Dependencies:
pip3 install flowbber[env]
Usage:
Variables will be collected if the name of the variable matches any item in the
include list but DOES NOT match any item in the exclude list.
Variable names will be stored in lowercase if the lowercase option is set
to true (the default).
Optionally, a type can be specified for each environment variable, so that it is parsed, interpreted and collected with the expected datatype.
[[sources]]
type = "env"
id = "..."
[sources.config]
include = ["PYTHONHASHSEED"]
exclude = []
lowercase = true
[sources.config.types]
PYTHONHASHSEED = "integer"
{
"sources": [
{
"type": "env",
"id": "...",
"config": {
"include": [
"PYTHONHASHSEED"
],
"exclude": [],
"lowercase": true,
"types": {
"PYTHONHASHSEED": "integer"
}
}
}
]
}
Filtering examples:
Collect all environment variables
{
"include": ["*"],
"exclude": []
}
Collect all except a few
{
"include": ["*"],
"exclude": ["*KEY*", "*SECRET*"]
}
Collect only the ones specified
{
"include": ["PYTHONHASHSEED"],
"exclude": []
}
Using with Jenkins CI
This source is very helpful to collect information from Jenkins CI:
[[sources]]
type = "env"
id = "jenkins"
[sources.config]
include = [
"BUILD_NUMBER",
"JOB_NAME",
"GIT_COMMIT",
"GIT_URL",
"GIT_BRANCH",
"BUILD_TIMESTAMP"
]
lowercase = false
[sources.config.types]
BUILD_NUMBER = "integer"
BUILD_TIMESTAMP = "iso8601"
Note
To parse the BUILD_TIMESTAMP
variable as ISO 8601 the format needs to be
set to ISO 8601. For more information visit:
https://wiki.jenkins.io/display/JENKINS/Build+Timestamp+Plugin
{
"sources": [
{
"type": "env",
"id": "jenkins",
"config": {
"include": [
"BUILD_NUMBER",
"JOB_NAME",
"GIT_COMMIT",
"GIT_URL",
"GIT_BRANCH",
"BUILD_TIMESTAMP"
],
"lowercase": false,
"types": {
"BUILD_NUMBER": "integer"
}
}
}
]
}
List of patterns of environment variables to include.
Matching is performed using Python’s fnmatch.
Default: []
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
'empty': False,
},
}
Secret: False
List of patterns of environment variables to exclude.
Matching is performed using Python’s fnmatch.
Default: []
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
'empty': False,
},
}
Secret: False
Store variable names in lowercase.
Default: True
Optional: True
Schema:
{
'type': 'boolean',
}
Secret: False
Specify the data type of the environment variables.
At the time of this writing, the parsing functions allowed are:

- int()
- float()
- str()
- flowbber.utils.types.autocast()
- flowbber.utils.types.booleanize()
- flowbber.utils.iso8601.iso8601_to_datetime()

Default: None
Optional: True
Schema:
{
'type': 'dict',
'keysrules': {
'type': 'string',
'empty': False,
},
'valuesrules': {
'type': 'string',
'empty': False,
'allowed': list(TYPE_PARSERS),
},
}
Secret: False
This source collects information from a (this) local git repository.
The directory option can be used to specify a path to a local git
repository to get information from. In particular, if your CI system created
an out-of-repository build directory, you can always get the context of the
repository the pipeline definition file is committed to using:
[[sources]]
type = "git"
id = "..."
[sources.config]
directory = "{git.root}"
{
"sources": [
{
"type": "git",
"id": "...",
"config": {
"directory": "{git.root}"
}
}
]
}
Data collected:
{
"root": "/home/kuralabs/flowbber",
"branch": "master",
"rev": "9742e30",
"tag": "2.0.0",
"name": "The Author",
"email": "author@domain.com",
"subject": "Added a new Git source.",
"body": "",
"date": "2017-09-13T01:27:55-06:00"
}
Dependencies:
pip3 install flowbber[git]
Usage:
[[sources]]
type = "git"
id = "..."
[sources.config]
directory = "."
{
"sources": [
{
"type": "git",
"id": "...",
"config": {
"directory": "."
}
}
]
}
This source collects pull requests and issues statistics from a GitHub repository using the GitHub API v3. This source requires a personal access token; you can create one in your settings. No particular scope is required.
Data collected:
{
"issue": {
"closed": 1,
"open": 2
},
"pr": {
"closed": 3,
"open": 4
}
}
Dependencies:
pip3 install flowbber[github]
Usage:
[[sources]]
type = "github"
id = "..."
[sources.config]
token = "abcdefabcdefabcdefabcdef"
repository = "organization/repository"
base_url = "https://api.github.com"
{
"sources": [
{
"type": "github",
"id": "...",
"config": {
"token": "abcdefabcdefabcdefabcdef",
"repository": "organization/repository",
"base_url": "https://api.github.com"
}
}
]
}
Personal access token to use to connect to the GitHub API. Keep this value safe and secret.
Default: N/A
Optional: False
Schema:
{
'type': 'string',
'empty': False,
}
Secret: True
Name of the repository to query for in the form organization/repository.
Default: N/A
Optional: False
Schema:
{
'type': 'string',
'empty': False,
}
Secret: False
Base URL to connect to the GitHub v3 API.
For public GitHub, the default https://api.github.com
should be used.
For private GitHub Enterprise instances use
https://github.yourdomain.com/api/v3
or similar.
Default: https://api.github.com
Optional: True
Schema:
{
'type': 'string',
'empty': False,
}
Secret: False
This source parses the JUnit-like results XML file generated by Google Test.
Data collected:
{
"failures": 1,
"disabled": 1,
"errors": 1,
"tests": 1,
"time": 10.555,
"timestamp": "2017-09-13T00:51:51",
"properties": {
"<propname1>": "<propvalue1>"
},
"suites": {
"<suitename1>": {
"cases": {
"<casename1>": {
"status": "<PASS|FAIL|SKIP>",
"time": 0.05,
"properties": {
"<propname1>": "<propvalue1>"
}
},
"<casename2>": {
"status": "<PASS|FAIL|SKIP>",
"time": 0.05,
"properties": {
"<propname1>": "<propvalue1>"
}
}
},
"properties": {
"<propname1>": "<propvalue1>"
},
"failures": 1,
"passed": 1,
"disabled": 1,
"errors": 1,
"tests": 1,
"time": 0.456
}
}
}
In addition to the previous data structure, if status is FAIL
an additional
key failures
will be available with a list of failures found:
{
# ...
'failures': [
'/home/kuralabs/googletest-example/tests/test2.cpp:12\n'
'Expected: 0\n'
'To be equal to: 1',
]
}
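The counters and failure entries above can be read from a Google Test results XML with the standard library alone; here is a minimal sketch (not Flowbber's actual parser), using an inline sample file:

```python
# Sketch: read the top-level counters and failed cases from a
# Google Test JUnit-like results XML.
import xml.etree.ElementTree as ET

XML = """
<testsuites tests="2" failures="1" disabled="0" errors="0" time="0.1">
  <testsuite name="SuiteOne" tests="2" failures="1">
    <testcase name="case_ok" time="0.05" classname="SuiteOne"/>
    <testcase name="case_fail" time="0.05" classname="SuiteOne">
      <failure message="Expected: 0"/>
    </testcase>
  </testsuite>
</testsuites>
"""

root = ET.fromstring(XML)
summary = {
    "tests": int(root.get("tests")),
    "failures": int(root.get("failures")),
    "disabled": int(root.get("disabled")),
    "errors": int(root.get("errors")),
}
# A case failed if it carries at least one <failure> child element.
failed = [
    case.get("name")
    for case in root.iter("testcase")
    if case.find("failure") is not None
]
print(summary, failed)
```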
Dependencies:
pip3 install flowbber[gtest]
Usage:
[[sources]]
type = "gtest"
id = "..."
[sources.config]
xmlpath = "tests.xml"
{
"sources": [
{
"type": "gtest",
"id": "...",
"config": {
"xmlpath": "tests.xml"
}
}
]
}
This source fetches and parses a local or remote (http, https) JSON file.
Data collected:
Same as the source file
Dependencies:
pip3 install flowbber[json]
Usage:
[[sources]]
type = "json"
id = "..."
[sources.config]
file_uri = "file://{pipeline.dir}/file.json"
encoding = "utf-8"
ordered = true
verify_ssl = true
extract = true
{
"sources": [
{
"type": "json",
"id": "...",
"config": {
"file_uri": "file://{pipeline.dir}/file.json",
"encoding": "utf-8",
"ordered": true,
"verify_ssl": true,
"extract": true
}
}
]
}
URI to the JSON file. If no scheme is specified, file:// will be used.
Supported schemes:

file:// (the default): File system path to the JSON file, e.g. file://path/to/file.json, or, if using Substitutions, file://{pipeline.dir}/file.json.

http[s]://: URL to download the JSON file, e.g. https://mydomain.com/archive/file.json.
Default: N/A
Optional: False
Schema:
{
'type': 'string',
'empty': False,
}
Secret: False
Encoding to use to decode the file.
Default: utf-8
Optional: True
Schema:
{
'type': 'string',
'empty': False,
}
Secret: False
Parses the file in order and returns it as a collections.OrderedDict instead of an unordered dictionary.
Default: False
Optional: True
Schema:
{
'type': 'boolean'
}
Secret: False
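The ordered behavior corresponds to the standard library's object_pairs_hook; a minimal sketch:

```python
# Sketch: parse JSON preserving the key order of the source document
# by collecting pairs into an OrderedDict.
import json
from collections import OrderedDict

document = '{"b": 1, "a": 2, "c": 3}'

ordered = json.loads(document, object_pairs_hook=OrderedDict)
print(type(ordered).__name__, list(ordered))
```

On modern Python, plain dicts also preserve insertion order, but an OrderedDict makes the intent explicit and keeps order-sensitive comparisons well defined.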
Enable or disable SSL verification. This option only applies if the https
scheme is used.
Default: True
Optional: True
Schema:
{
'type': 'boolean'
}
Secret: False
Extract the JSON file from a Zip archive.
If using extraction, the .zip extension will be automatically appended
to the file_uri filename parameter if not present.
It is expected that the file inside the archive matches the file_uri
filename without the .zip extension. For example:

| extract parameter | file_uri parameter | File loaded | Expected file inside Zip |
|---|---|---|---|
| True | xxx://archive.json | xxx://archive.json.zip | archive.json |
| True | xxx://archive.json.zip | xxx://archive.json.zip | archive.json |
Default: False
Optional: True
Schema:
{
'type': 'boolean',
}
Secret: False
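The naming convention for extraction can be sketched with a small hypothetical helper (not Flowbber's actual code): append .zip to the URI when missing, and expect the inner file to be the archive name without the .zip extension.

```python
# Sketch of the ".zip" naming convention used by the extract option.
def zip_names(file_uri):
    """Return (uri of the archive to load, file expected inside it)."""
    if not file_uri.endswith(".zip"):
        file_uri += ".zip"
    # The inner file is the archive's base name without ".zip".
    inner = file_uri.rsplit("/", 1)[-1][: -len(".zip")]
    return file_uri, inner

print(zip_names("xxx://archive.json"))      # ('xxx://archive.json.zip', 'archive.json')
print(zip_names("xxx://archive.json.zip"))  # ('xxx://archive.json.zip', 'archive.json')
```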
This source calls lcov on a specified directory to generate a tracefile, or loads one directly, and processes it with lcov_cobertura to create a standard Cobertura XML file, which in turn is parsed using the flowbber Cobertura source.
Note
This source requires the lcov
executable to be available in your system
to run.
Data collected:
{
"files": {
"my_source_code.c": {
"total_statements": 40,
"total_misses": 20,
"branch_rate": 0.5,
"total_hits": 8,
"line_rate": 0.5
},
"another_source.c": {
"total_statements": 40,
"total_misses": 40,
"branch_rate": 0.5,
"total_hits": 8,
"line_rate": 0.0
}
},
"total": {
"total_statements": 80,
"total_misses": 20,
"line_rate": 0.75
},
"tracefile": "<path-to-tracefile.info>"
}
Dependencies:
pip3 install flowbber[lcov]
Usage:
[[sources]]
type = "lcov"
id = "..."
[sources.config]
source = "{pipeline.dir}"
rc_overrides = ["lcov_branch_coverage=1"]
remove = ["*hello2*"]
remove_files = [
"/file/with/remove/patterns",
".removepatterns"
]
extract = ["*hello1*"]
extract_files = [
"/file/with/extract/patterns",
".extractpatterns"
]
derive_func_data = false
{
"sources": [
{
"type": "lcov",
"id": "...",
"config": {
"source": "{pipeline.dir}",
"rc_overrides": ["lcov_branch_coverage=1"],
"remove": ["*hello2*"]
"remove_files": [
"/file/with/remove/patterns",
".removepatterns"
],
"extract": ["*hello1*"],
"extract_files": [
"/file/with/extract/patterns",
".extractpatterns"
],
"derive_func_data": false,
}
}
]
}
Path to the directory containing gcov’s .gcda
files or path to a tracefile
.info
file.
Default: N/A
Optional: False
Schema:
{
'type': 'string',
'empty': False
}
Secret: False
Override lcov configuration file settings.
Elements should have the form SETTING=VALUE
.
Default: []
Optional: False
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
'empty': False
},
}
Secret: False
List of patterns of files to remove from coverage computation.
Patterns will be interpreted as shell wildcard patterns.
Default: []
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
'empty': False,
},
}
Secret: False
List of paths to files containing patterns of files to remove from coverage computation.
Patterns will be interpreted as shell wildcard patterns.
All unique patterns parsed from these files will be added to the ones defined
in the remove
configuration option.
Default: []
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
'empty': False,
},
}
Secret: False
List of patterns of files to extract for coverage computation.
Use this option if you want to extract coverage data for only a particular set of files from a tracefile. Patterns will be interpreted as shell wildcard patterns.
Default: []
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
'empty': False,
},
}
Secret: False
List of paths to files containing patterns of files to extract for coverage computation.
Patterns will be interpreted as shell wildcard patterns.
All unique patterns parsed from these files will be added to the ones defined
in the extract
configuration option.
Default: []
Optional: True
Schema:
{
'type': 'list',
'schema': {
'type': 'string',
'empty': False,
},
}
Secret: False
Allow lcov to calculate function coverage data from line coverage data.
If True
then the --derive-func-data
option is used on the lcov
commands. If False
then the option is not used.
This option is used to collect function coverage data, even when this data is not provided by the installed gcov tool. Instead, lcov will use line coverage data and information about which lines belong to a function to derive function coverage.
Default: False
Optional: True
Schema:
{
'type': 'boolean',
}
Secret: False
This source parses the JUnit-like results XML file generated by pytest.
Data collected:
{
"errors": 0,
"failures": 2,
"skips": 0,
"tests": 4,
"passed": 6,
"time": 0.047,
"suites": {
"<suite_name>": {
"errors": 0,
"failures": 2,
"skips": 0,
"tests": 4,
"time": 0.047,
"cases": {
"<classname>.<name>": {
"status": "<PASS|FAIL|ERROR|SKIP>",
"file": "test/test_file.py",
"line": 19,
"classname": "test.test_file",
"name": "test_function",
"time": 0.0012459754943847656,
"properties": [
{"<propname1>": "<propvalue1>"},
{"<propname2>": "<propvalue2>"}
]
}
}
}
}
}
Warning
Most of the time the <suite_name> will be set to pytest, but
do not hardwire your code to that value. It can be changed in your
pytest.ini file.
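For example, using pytest's junit_suite_name ini option (available in recent pytest versions; the suite name shown is hypothetical):

```ini
[pytest]
junit_suite_name = my_custom_suite_name
```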
In addition to the previous data structure, if status is FAIL
, ERROR
or SKIP
an additional key failure
, error
or skipped
will be
available with information of the problem:
{
# ...
'error': {
'code': (
'@pytest.fixture\n'
' def dofail():\n'
'> raise RuntimeError()\n'
'E RuntimeError\n\nconftest.py:14: RuntimeError'
),
'message': 'test setup failure',
}
}
The information object always has a message and a code key describing
what the issue is and where it happened.
Dependencies:
pip3 install flowbber[pytest]
Usage:
[[sources]]
type = "pytest"
id = "..."
[sources.config]
xmlpath = "tests.xml"
{
"sources": [
{
"type": "pytest",
"id": "...",
"config": {
"xmlpath": "tests.xml"
}
}
]
}
This source counts source lines of code. It scans a directory for source code files, identifies their language, and counts the number of code, comment, and empty lines. It is highly tunable using include and exclude patterns as explained below. It is implemented on top of pygount, which is very accurate but not the fastest; benchmark it before running it on huge code bases.
Data collected:
{
"sloc": {
"html": {
"string": 0,
"empty": 59,
"code": 129,
"documentation": 7
},
"python": {
"string": 362,
"empty": 1532,
"code": 2262,
"documentation": 4291
},
"restructuredtext": {
"string": 73,
"empty": 619,
"code": 1133,
"documentation": 19
},
},
"files": {
"setup.py": "python",
"lib/flowbber/main.py": "python",
"... more collected files ...": "<language detected>"
}
}
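Grand totals across languages are not part of the collected data, but they follow directly from the sloc entry; a minimal sketch using the sample numbers above:

```python
# Sketch: sum the per-language counters of the "sloc" entry into
# grand totals (numbers taken from the sample data above).
sloc = {
    "html": {"string": 0, "empty": 59, "code": 129, "documentation": 7},
    "python": {"string": 362, "empty": 1532, "code": 2262, "documentation": 4291},
    "restructuredtext": {"string": 73, "empty": 619, "code": 1133, "documentation": 19},
}

totals = {}
for counters in sloc.values():
    for key, value in counters.items():
        totals[key] = totals.get(key, 0) + value

print(totals["code"])  # 3524
```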
Dependencies:
pip3 install flowbber[sloc]
Usage:
[[sources]]
type = "sloc"
id = "..."
[sources.config]
directory = "{git.root}"
include = ["*"]
exclude = []
{
"sources": [
{
"type": "sloc",
"id": "...",
"config": {
"directory": "{git.root}",
"include": ["*"],
"exclude": []
}
}
]
}
Root directory to search for files.
Default: '.'
Optional: True
Schema:
{
'type': 'string',
'empty': False,
}
Secret: False
This source collects the network bandwidth the system currently has using Speedtest.net servers.
Three metrics are collected:

- ping: Latency, in milliseconds.
- download: Download speed, in bits per second.
- upload: Upload speed, in bits per second.

Data collected:
{
"ping": 9.306252002716064,
"download": 42762976.92544772,
"upload": 19425388.307319913
}
Dependencies:
pip3 install flowbber[speed]
Usage:
[[sources]]
type = "speed"
id = "..."
[sources.config]
runs = 2
{
"sources": [
{
"type": "speed",
"id": "...",
"config": {
"host": null,
"runs": 2
}
}
]
}
This source collects timestamps in several formats.
It is recommended to include at least one timestamp source on every pipeline in order to have a unique and consistent timestamp for the whole pipeline to use.
Data collected:
{
"timezone": null,
"epoch": 1502852229,
"epochf": 1502852229.427491,
"iso8601": "2017-08-15T20:57:09",
"strftime": "2017-08-15 20:57:09"
}
{
"timezone": 0,
"epoch": 1507841667,
"epochf": 1507841667.9304831028,
"iso8601": "2017-10-12T20:54:27+00:00",
"strftime": "2017-10-12 20:54:27"
}
Dependencies:
pip3 install flowbber[timestamp]
Usage:
[[sources]]
type = "timestamp"
id = "..."
[sources.config]
epoch = true
epochf = true
iso8601 = true
strftime = "%Y-%m-%d %H:%M:%S"
{
"sources": [
{
"type": "timestamp",
"id": "...",
"config": {
"timezone": null,
"epoch": true,
"epochf": true,
"iso8601": true,
"strftime": "%Y-%m-%d %H:%M:%S"
}
}
]
}
Specify the timezone for which the timestamp should be calculated. If None
is provided (the default), the timestamp will be calculated using the local
timezone (current time). Use 0
for a UTC timestamp, and a +/-12 integer for
any other timezone.
Note that this doesn’t affect the epoch
or epochf
timestamps, as those
values are always in POSIX time which is the number of seconds since the Epoch
(1970-01-01 UTC) not counting leap seconds.
Default: None
Optional: True
Schema:
{
'type': 'integer',
'nullable': True,
'max': 12,
'min': -12,
}
Secret: False
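The mapping from the timezone option to the collected timestamps can be sketched as follows (a minimal illustration, not Flowbber's actual code; the epoch value is taken from the sample data above):

```python
# Sketch: None uses local time, 0 is UTC, and a +/-12 integer selects
# a fixed offset; epoch values are unaffected by the timezone.
from datetime import datetime, timedelta, timezone

def timestamps(epoch, tz_offset=None):
    tz = None if tz_offset is None else timezone(timedelta(hours=tz_offset))
    moment = datetime.fromtimestamp(epoch, tz)
    return {
        "timezone": tz_offset,
        "epoch": int(epoch),
        "iso8601": moment.isoformat(),
        "strftime": moment.strftime("%Y-%m-%d %H:%M:%S"),
    }

print(timestamps(1507841667, tz_offset=0)["iso8601"])  # 2017-10-12T20:54:27+00:00
```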
Include seconds since the EPOCH, as integer.
Default: True
Optional: True
Schema:
{
'type': 'boolean',
}
Secret: False
Include seconds since the EPOCH, as float.
Default: False
Optional: True
Schema:
{
'type': 'boolean',
}
Secret: False
This source collects information about the current user.
Data collected:
{
"uid": 1000,
"user": "kuralabs"
}
Dependencies:
pip3 install flowbber[user]
Usage:
[[sources]]
type = "user"
id = "..."
{
"sources": [
{
"type": "user",
"id": "...",
"config": {}
}
]
}
This source parses and collects information from the XML generated by Valgrind’s DRD tool.
Such XML file can be generated with:
$ valgrind \
--tool=drd \
--gen-suppressions=all \
--read-var-info=yes \
--error-exitcode=1 \
--xml=yes \
--xml-file=drd.xml \
./executable
Data collected:
Important
Sadly, Valgrind’s XML format doesn’t include a field with the total number
of errors, just an array of which errors were found. A total_errors
field is injected to allow the user to easily track the evolution of the
amount of errors.
{
"protocolversion":"4",
"protocoltool":"drd",
"preamble":{
"line":[
"drd, a thread error detector",
"Copyright (C) 2006-2015, and GNU GPL'd, by Bart Van Assche.",
"Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info",
"Command: ./binary"
]
},
"pid":"10812",
"ppid":"2162",
"tool": "drd",
"args":{
"vargv":{
"exe":"/usr/bin/valgrind.bin",
"arg":[
"--tool=drd",
"--gen-suppressions=all",
"--read-var-info=yes",
"--error-exitcode=1",
"--xml=yes",
"--xml-file=drd.xml"
]
},
"argv":{
"exe":"./binary",
"arg":[]
}
},
"status":[
{
"state":"RUNNING",
"time":"00:00:00:01.603"
},
{
"state":"FINISHED",
"time":"00:00:00:48.866"
}
],
"total_errors": 1,
"error":[
{
"unique":"0x968",
"tid":"5",
"kind":"ConflictingAccess",
"what":"Conflicting load by thread 5 at 0x12cddc28 size 1",
"stack":[
{
"frame":[
{
"ip":"0xED467F",
"obj":"/home/library/binary",
"fn":"check_thread",
"dir":"/home/library",
"file":"hello.cpp",
"line":"76"
},
{
"ip":"0xED4DF2",
"obj":"/home/library/binary",
"fn":"main()",
"dir":"/home/library",
"file":"hello.cpp",
"line":"130"
}
]
}
],
"auxwhat":"Location 0x12cd1c28 is 0 bytes inside thread_data[1].valid,",
"xauxwhat":{
"text":"a global variable declared at hello_world.c:152",
"file":"hello_world.c",
"line":"152"
},
"other_segment_start":[
{
"stack":[
{
"frame":[
{
"ip":"0xED467F",
"obj":"/usr/lib/valgrind/vgpreload_drd-amd64-linux.so",
"fn":"pthread_rwlock_rdlock"
},
{
"ip":"0xED4DF2",
"obj":"/home/library/binary",
"fn":"main",
"dir":"/home/library",
"file":"hello.cpp",
"line":"245"
}
]
}
]
},
{
"stack":[
{
"frame":[
{
"ip":"0xED467F",
"obj":"/usr/lib/valgrind/vgpreload_drd-amd64-linux.so",
"fn":"pthread_mutex_unlock"
},
{
"ip":"0xED4DF2",
"obj":"/home/library/binary",
"fn":"__rwlock_rdlock",
"dir":"/home/library",
"file":"hello.cpp",
"line":"534"
},
{
"ip":"0xED4DF2",
"obj":"/home/library/binary",
"fn":"main",
"dir":"/home/library",
"file":"hello.cpp",
"line":"130"
}
]
}
]
}
],
"other_segment_end":[
{
"stack":[
{
"frame":[
{
"ip":"0xED467F",
"obj":"/usr/lib/valgrind/vgpreload_drd-amd64-linux.so",
"fn":"pthread_rwlock_rdlock"
},
{
"ip":"0xED4DF2",
"obj":"/home/library/binary",
"fn":"main",
"dir":"/home/library",
"file":"hello.cpp",
"line":"245"
}
]
}
]
},
{
"stack":[
{
"frame":[
{
"ip":"0xED467F",
"obj":"/usr/lib/valgrind/vgpreload_drd-amd64-linux.so",
"fn":"pthread_mutex_unlock"
},
{
"ip":"0xED4DF2",
"obj":"/home/library/binary",
"fn":"__rwlock_rdlock",
"dir":"/home/library",
"file":"hello.cpp",
"line":"534"
},
{
"ip":"0xED4DF2",
"obj":"/home/library/binary",
"fn":"main",
"dir":"/home/library",
"file":"hello.cpp",
"line":"130"
}
]
}
]
}
]
}
]
}
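The total_errors injection mentioned in the note above amounts to counting the error elements in the Valgrind XML; a minimal sketch with the standard library (not Flowbber's actual parser), using a trimmed-down inline sample:

```python
# Sketch: count the <error> children of Valgrind's XML output and
# inject the count as "total_errors" into the collected data.
import xml.etree.ElementTree as ET

XML = """
<valgrindoutput>
  <protocolversion>4</protocolversion>
  <protocoltool>drd</protocoltool>
  <error><unique>0x968</unique><kind>ConflictingAccess</kind></error>
</valgrindoutput>
"""

root = ET.fromstring(XML)
data = {"protocoltool": root.findtext("protocoltool")}
data["total_errors"] = len(root.findall("error"))
print(data)
```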
Dependencies:
pip3 install flowbber[valgrind_drd]
Usage:
[[sources]]
type = "valgrind_drd"
id = "..."
[sources.config]
xmlpath = "drd.xml"
{
"sources": [
{
"type": "valgrind_drd",
"id": "...",
"config": {
"xmlpath": "drd.xml"
}
}
]
}
This source parses and collects information from the XML generated by Valgrind’s Helgrind tool.
Such XML file can be generated with:
$ valgrind \
--tool=helgrind \
--gen-suppressions=all \
--read-var-info=yes \
--error-exitcode=1 \
--xml=yes \
--xml-file=helgrind.xml \
./executable
Data collected:
Important
Sadly, Valgrind’s XML format doesn’t include a field with the total number
of errors, just an array of which errors were found. A total_errors
field is injected to allow the user to easily track the evolution of the
amount of errors.
{
"protocolversion":"4",
"protocoltool":"helgrind",
"preamble":{
"line":[
"Helgrind, a thread error detector",
"Copyright (C) 2007-2015, and GNU GPL'd, by OpenWorks LLP et al.",
"Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info",
"Command: ./binary"
]
},
"pid":"11023",
"ppid":"2162",
"tool": "helgrind",
"args":{
"vargv":{
"exe":"/usr/bin/valgrind.bin",
"arg":[
"--tool=helgrind",
"--gen-suppressions=all",
"--read-var-info=yes",
"--error-exitcode=1",
"--xml=yes",
"--xml-file=helgrind.xml"
]
},
"argv":{
"exe":"./binary",
"arg":[]
}
},
"status":[
{
"state":"RUNNING",
"time":"00:00:00:01.593"
},
{
"state":"FINISHED",
"time":"00:00:00:58.060"
}
],
"total_errors": 1,
"error":[
{
"unique":"0x968",
"tid":"4",
"kind":"Race",
"xwhat":[
{
"text":"Possible data race during write of size 1 at 0x12CD1C28 by thread #4",
"hthreadid":"4"
}
],
"stack":[
{
"frame":[
{
"ip":"0xED467F",
"obj":"/home/library/binary",
"fn":"check_thread",
"dir":"/home/library",
"file":"hello.cpp",
"line":"76"
},
{
"ip":"0xED4DF2",
"obj":"/home/library/binary",
"fn":"main",
"dir":"/home/library",
"file":"hello.cpp",
"line":"130"
}
]
},
{
"frame":[
{
"ip":"0xED467F",
"obj":"/home/library/binary",
"fn":"thread_exists",
"dir":"/home/library",
"file":"hello.cpp",
"line":"234"
},
{
"ip":"0xED9402",
"obj":"/home/library/binary",
"fn":"main",
"dir":"/home/library",
"file":"hello.cpp",
"line":"130"
}
]
}
],
"auxwhat":"Location 0x12cd1c28 is 0 bytes inside thread_data[1].valid,"
}
]
}
Dependencies:
pip3 install flowbber[valgrind_helgrind]
Usage:
[[sources]]
type = "valgrind_helgrind"
id = "..."
[sources.config]
xmlpath = "helgrind.xml"
{
"sources": [
{
"type": "valgrind_helgrind",
"id": "...",
"config": {
"xmlpath": "helgrind.xml"
}
}
]
}
This source parses and collects information from the XML generated by Valgrind’s Memcheck tool.
Such XML file can be generated with:
$ valgrind \
--tool=memcheck \
--xml=yes \
--xml-file=memcheck.xml \
--leak-check=full \
./executable
Data collected:
Important
Sadly, Valgrind’s XML format doesn’t include a field with the total number
of errors, just an array of which errors were found. A total_errors
field is injected to allow the user to easily track the evolution of the
amount of errors.
{
"status":[
{
"state":"RUNNING",
"time":"00:00:00:00.268"
},
{
"state":"FINISHED",
"time":"00:00:00:59.394"
}
],
"ppid":"4242",
"preamble":{
"line":[
"Memcheck, a memory error detector",
"Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.",
"Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info",
"Command: ./binary"
]
},
"suppcounts":{
"pair":[
{
"name":"reachable memory from libstdc++ pool",
"count":"1"
}
]
},
"pid":"424242",
"errorcounts":null,
"protocoltool":"memcheck",
"protocolversion":"4",
"tool":"memcheck",
"total_errors": 1,
"error":[
{
"kind":"Leak_DefinitelyLost",
"stack":{
"frame":[
{
"obj":"/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so",
"ip":"0x4C2FB55",
"fn":"calloc"
},
{
"dir":"/home/library",
"obj":"/home/library/binary",
"line":"76",
"ip":"0xED467F",
"fn":"hello_world()",
"file":"hello.cpp"
}
]
},
"xwhat":{
"leakedblocks":"1",
"text":"8 bytes in 1 blocks are definitely lost in loss record 1 of 5",
"leakedbytes":"8"
},
"tid":"1",
"unique":"0x0"
}
],
"args":{
"vargv":{
"exe":"/usr/bin/valgrind.bin",
"arg":[
"--track-origins=yes",
"--leak-check=full",
"--show-leak-kinds=all",
"--errors-for-leak-kinds=definite",
"--error-exitcode=1",
"--xml=yes",
"--xml-file=memcheck.xml",
"--suppressions=/home/library/suppressions.supp"
]
},
"argv":{
"exe":"./binary",
"arg":[]
}
}
}
Dependencies:
pip3 install flowbber[valgrind_memcheck]
Usage:
[[sources]]
type = "valgrind_memcheck"
id = "..."
[sources.config]
xmlpath = "memcheck.xml"
{
"sources": [
{
"type": "valgrind_memcheck",
"id": "...",
"config": {
"xmlpath": "memcheck.xml"
}
}
]
}