Commit
Merge pull request #260 from JeffersonLab/nbrei_perftest
Improve performance test suite
nathanwbrei authored Nov 10, 2023
2 parents 2db622a + f861d24 commit e0392f1
Showing 67 changed files with 233 additions and 106 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ccpp-docker.yml
@@ -16,5 +16,5 @@ jobs:
steps:
- name: Build and run
id: build_and_run
-      uses: faustus123/[email protected]
+      uses: nathanwbrei/[email protected]

2 changes: 1 addition & 1 deletion .github/workflows/ccpp-epscimac.yml
@@ -51,4 +51,4 @@ jobs:
needs: build-n-install-jana
steps:
- name: run jana tests
-        run: $GITHUB_WORKSPACE/Darwin/bin/janatests
+        run: $GITHUB_WORKSPACE/Darwin/bin/jana-unit-tests
5 changes: 3 additions & 2 deletions .github/workflows/ccpp-linux.yml
@@ -24,6 +24,7 @@ jobs:
- uses: actions/checkout@v3
- name: deps
run: |
+          sudo apt-get update
sudo apt-get install -y libzmq3-dev libxerces-c-dev python3-dev
- name: cmake
run: |
@@ -49,8 +50,8 @@ jobs:
$GITHUB_WORKSPACE/Linux/bin/jana -PPLUGINS=JTest -Pjana:nevents=100
- name: janatests
run: |
-          echo "--- Running janatests ------------------------------"
-          $GITHUB_WORKSPACE/Linux/bin/janatests
+          echo "--- Running jana-unit-tests ------------------------------"
+          $GITHUB_WORKSPACE/Linux/bin/jana-unit-tests
- name: BlockExample
run: |
echo "--- Running BlockExample ------------------------------"
4 changes: 2 additions & 2 deletions .github/workflows/ccpp-macos.yml
@@ -34,5 +34,5 @@ jobs:
export JANA_PLUGIN_PATH=$GITHUB_WORKSPACE/Darwin/plugins
echo "--- Running JTest plugin -----------------------"
$GITHUB_WORKSPACE/Darwin/bin/jana -PPLUGINS=JTest -Pjana:nevents=100
-          echo "--- Running janatests ------------------------------"
-          $GITHUB_WORKSPACE/Darwin/bin/janatests
+          echo "--- Running jana-unit-tests ------------------------------"
+          $GITHUB_WORKSPACE/Darwin/bin/jana-unit-tests
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -197,7 +197,7 @@ add_subdirectory(src/libraries/JANA)
add_subdirectory(src/examples)
add_subdirectory(src/plugins)
add_subdirectory(src/programs/jana)
-add_subdirectory(src/programs/tests)
+add_subdirectory(src/programs/unit_tests)
add_subdirectory(src/programs/perf_tests)

add_subdirectory(src/python)
25 changes: 24 additions & 1 deletion docs/Howto.md
@@ -99,6 +99,29 @@ The following example shows how you would increase the verbosity of JPluginLoader
jana -Pplugins=JTest -Plog:debug=JPluginLoader,JComponentManager
```

+The `JTest` plugin lets you test JANA's performance for different workloads. It simulates a typical reconstruction pipeline with four stages: parsing, disentangling, tracking, and plotting. Parsing and plotting are sequential, whereas disentangling and tracking are parallel. Each stage reads all of the data written during the previous stage. The time spent and bytes written (and the random variation thereof) are set using the following parameters:
+
+| Name | Type | Default | Description |
+|:-----|:-----|:------------|:--------|
+jtest:parser_ms | int | 0 | Time spent during parsing
+jtest:parser_spread | double | 0.25 | Spread of time spent during parsing
+jtest:parser_bytes | int | 2000000 | Bytes written during parsing
+jtest:parser_bytes_spread | double | 0.25 | Spread of bytes written during parsing
+jtest:disentangler_ms | int | 20 | Time spent during disentangling
+jtest:disentangler_spread | double | 0.25 | Spread of time spent during disentangling
+jtest:disentangler_bytes | int | 500000 | Bytes written during disentangling
+jtest:disentangler_bytes_spread | double | 0.25 | Spread of bytes written during disentangling
+jtest:tracker_ms | int | 200 | Time spent during tracking
+jtest:tracker_spread | double | 0.25 | Spread of time spent during tracking
+jtest:tracker_bytes | int | 1000 | Bytes written during tracking
+jtest:tracker_bytes_spread | double | 0.25 | Spread of bytes written during tracking
+jtest:plotter_ms | int | 0 | Time spent during plotting
+jtest:plotter_spread | double | 0.25 | Spread of time spent during plotting
+jtest:plotter_bytes | int | 1000 | Bytes written during plotting
+jtest:plotter_bytes_spread | double | 0.25 | Spread of bytes written during plotting



The following parameters are used for benchmarking:

| Name | Type | Default | Description |
@@ -110,7 +133,7 @@ benchmark:threadstep | int | 1 | Thread count increment
benchmark:resultsdir | string | JANA_Test_Results | Directory name for benchmark test results


-The following parameters may come in handy when doing performance tuning:
+The following parameters are more advanced, but may come in handy when doing performance tuning:

| Name | Type | Default | Description |
|:-----|:-----|:------------|:--------|
12 changes: 6 additions & 6 deletions scripts/jana-plot-scaletest.py
@@ -37,15 +37,15 @@
You can control several parameters of the test using these
JANA configuration parameters:
-   BENCHMARK:NSAMPLES    # Number of samples for each benchmark test
-   BENCHMARK:MINTHREADS  # Minimum number of threads for benchmark test
-   BENCHMARK:MAXTHREADS  # Maximum number of threads for benchmark test
-   BENCHMARK:RESULTSDIR  # Output directory name for benchmark test results
-   BENCHMARK:THREADSTEP  # Delta number of threads between each benchmark test
+   benchmark:nsamples    # Number of samples for each benchmark test
+   benchmark:minthreads  # Minimum number of threads for benchmark test
+   benchmark:maxthreads  # Maximum number of threads for benchmark test
+   benchmark:resultsdir  # Output directory name for benchmark test results
+   benchmark:threadstep  # Delta number of threads between each benchmark test
Run "jana -c -b" to see the default values for each of these.
-To use this just go into the BENCHMARK:RESULTSDIR where the
+To use this just go into the benchmark:resultsdir where the
rates.dat file is and run it. Note that you must have python
installed as well as the numpy and matplotlib python packages.
2 changes: 1 addition & 1 deletion src/examples/UnitTestingExample/CMakeLists.txt
@@ -6,7 +6,7 @@ set (UnitTestingExample_SOURCES
add_executable(UnitTestingExample ${UnitTestingExample_SOURCES})

# Our copy of catch.hpp lives in this weird place. Maybe we should move it to ${CMAKE_SOURCE_DIR}/src/external
-target_include_directories(UnitTestingExample PRIVATE ${CMAKE_SOURCE_DIR}/src/programs/tests)
+target_include_directories(UnitTestingExample PRIVATE ${CMAKE_SOURCE_DIR}/src/programs/unit_tests)

target_link_libraries(UnitTestingExample Tutorial_plugin jana2)
set_target_properties(UnitTestingExample PROPERTIES PREFIX "" OUTPUT_NAME "UnitTestingExample")
66 changes: 43 additions & 23 deletions src/libraries/JANA/CLI/JBenchmarker.cc
@@ -5,47 +5,54 @@
#include "JBenchmarker.h"

#include <JANA/Utils/JCpuInfo.h>
+#include <JANA/JLogger.h>

#include <fstream>
#include <cmath>
#include <sys/stat.h>
#include <iostream>

JBenchmarker::JBenchmarker(JApplication* app) : m_app(app) {

m_max_threads = JCpuInfo::GetNumCpus();
-    m_logger = JLoggingService::logger("JBenchmarker");
+    m_logger = app->GetService<JLoggingService>()->get_logger("JBenchmarker");

auto params = app->GetJParameterManager();

-    params->SetParameter("NEVENTS", 0);
-    // Prevent users' choice of events from interfering with everything
+    params->SetParameter("jana:nevents", 0);
+    // Prevent users' choice of nevents from interfering with everything

params->SetDefaultParameter(
-            "BENCHMARK:NSAMPLES",
+            "benchmark:nsamples",
m_nsamples,
"Number of samples for each benchmark test");

params->SetDefaultParameter(
-            "BENCHMARK:MINTHREADS",
+            "benchmark:minthreads",
m_min_threads,
"Minimum number of threads for benchmark test");

params->SetDefaultParameter(
-            "BENCHMARK:MAXTHREADS",
+            "benchmark:maxthreads",
m_max_threads,
"Maximum number of threads for benchmark test");

params->SetDefaultParameter(
-            "BENCHMARK:THREADSTEP",
+            "benchmark:threadstep",
m_thread_step,
"Delta number of threads between each benchmark test");

params->SetDefaultParameter(
-            "BENCHMARK:RESULTSDIR",
+            "benchmark:resultsdir",
m_output_dir,
"Output directory name for benchmark test results");

-    params->SetParameter("NTHREADS", m_max_threads);
+    params->SetDefaultParameter(
+            "benchmark:copyscript",
+            m_copy_script,
+            "Copy plotting script to results dir");
+
+    params->SetParameter("nthreads", m_max_threads);
+    // Otherwise JApplication::Scale() doesn't scale up. This is an interesting bug. TODO: Remove me when fixed.
}

@@ -55,17 +62,23 @@ JBenchmarker::~JBenchmarker() {}

void JBenchmarker::RunUntilFinished() {

+    LOG_INFO(m_logger) << "Running benchmarker with the following settings:\n"
+        << "    benchmark:minthreads = " << m_min_threads << "\n"
+        << "    benchmark:maxthreads = " << m_max_threads << "\n"
+        << "    benchmark:threadstep = " << m_thread_step << "\n"
+        << "    benchmark:nsamples = " << m_nsamples << "\n"
+        << "    benchmark:resultsdir = " << m_output_dir << LOG_END;

m_app->SetTicker(false);
m_app->Run(false);

// Wait for events to start flowing indicating the source is primed
for (int i = 0; i < 5; i++) {
-        std::cout << "Waiting for event source to start producing ... rate: " << m_app->GetInstantaneousRate()
-                  << std::endl;
+        LOG_INFO(m_logger) << "Waiting for event source to start producing ... rate: " << m_app->GetInstantaneousRate() << LOG_END;
std::this_thread::sleep_for(std::chrono::milliseconds(1000));
auto rate = m_app->GetInstantaneousRate();
if (rate > 10.0) {
-            std::cout << "Rate: " << rate << "Hz - ready to begin test" << std::endl;
+            LOG_INFO(m_logger) << "Rate: " << rate << "Hz - ready to begin test" << LOG_END;
break;
}
}
@@ -74,7 +87,7 @@ void JBenchmarker::RunUntilFinished() {
std::map<uint32_t, std::pair<double, double> > rates; // key=nthreads val.first=rate in Hz, val.second=rms of rate in Hz
for (uint32_t nthreads = m_min_threads; nthreads <= m_max_threads && !m_app->IsQuitting(); nthreads += m_thread_step) {

-        std::cout << "Setting NTHREADS = " << nthreads << " ..." << std::endl;
+        LOG_INFO(m_logger) << "Setting nthreads = " << nthreads << " ..." << LOG_END;
m_app->Scale(nthreads);

// Loop for at most 60 seconds waiting for the number of threads to update
@@ -102,14 +115,13 @@
rates[nthreads].first = avg; // overwrite with updated value after each sample
rates[nthreads].second = rms; // overwrite with updated value after each sample

-            std::cout << "nthreads=" << m_app->GetNThreads() << " rate=" << rate << "Hz";
-            std::cout << " (avg = " << avg << " +/- " << rms / sqrt(N) << " Hz)";
-            std::cout << std::endl;
+            LOG_INFO(m_logger) << "nthreads=" << m_app->GetNThreads() << " rate=" << rate << "Hz"
+                               << " (avg = " << avg << " +/- " << rms / sqrt(N) << " Hz)" << LOG_END;
}
}

// Write results to files
-    std::cout << "Writing test results to: " << m_output_dir << std::endl;
+    LOG_INFO(m_logger) << "Writing test results to: " << m_output_dir << LOG_END;
mkdir(m_output_dir.c_str(), S_IRWXU | S_IRWXG | S_IROTH | S_IXOTH);

std::ofstream ofs1(m_output_dir + "/samples.dat");
@@ -134,12 +146,20 @@ void JBenchmarker::RunUntilFinished() {
}
ofs2.close();

-    copy_to_output_dir("${JANA_HOME}/bin/jana-plot-scaletest.py");
-
-    std::cout << "Testing finished. To view a plot of test results:" << std::endl << std::endl;
-    std::cout << "    cd " << m_output_dir << std::endl;
-    std::cout << "    ./jana-plot-scaletest.py" << std::endl << std::endl;
-    m_app->Quit();
+    if (m_copy_script) {
+        copy_to_output_dir("${JANA_HOME}/bin/jana-plot-scaletest.py");
+        LOG_INFO(m_logger)
+            << "Testing finished. To view a plot of test results:\n"
+            << "    cd " << m_output_dir
+            << "\n    ./jana-plot-scaletest.py\n" << LOG_END;
+    }
+    else {
+        LOG_INFO(m_logger)
+            << "Testing finished. To view a plot of test results:\n"
+            << "    cd " << m_output_dir << "\n"
+            << "    $JANA_HOME/bin/jana-plot-scaletest.py\n" << LOG_END;
+    }
+    m_app->Stop();
}


1 change: 1 addition & 0 deletions src/libraries/JANA/CLI/JBenchmarker.h
@@ -17,6 +17,7 @@ class JBenchmarker {
unsigned m_thread_step = 1;
unsigned m_nsamples = 15;
std::string m_output_dir = "JANA_Test_Results";
+    bool m_copy_script = true;

public:
explicit JBenchmarker(JApplication* app);
2 changes: 2 additions & 0 deletions src/libraries/JANA/Services/JLoggingService.h
@@ -81,6 +81,8 @@ class JLoggingService : public JService {
for (auto& s : groups) {
m_local_log_levels[s] = JLogger::Level::TRACE;
}
+        // Set the log level on the parameter manager, resolving the chicken-and-egg problem.
+        params->SetLogger(get_logger("JParameterManager"));
}

JLogger get_logger() {
10 changes: 6 additions & 4 deletions src/libraries/JANA/Services/JParameterManager.cc
@@ -3,7 +3,6 @@
// Subject to the terms in the LICENSE file found in the top-level directory.

#include "JParameterManager.h"
-#include "JLoggingService.h"

#include <vector>
#include <string>
@@ -16,7 +15,10 @@ using namespace std;

/// @brief Default constructor
JParameterManager::JParameterManager() {
-    m_logger = JLoggingService::logger("JParameterManager");
+    // Set the logger temporarily, until the JLoggingService figures out the correct log level
+    m_logger.show_classname = true;
+    m_logger.className = "JParameterManager";
+    m_logger.level = JLogger::Level::INFO;
}

/// @brief Copy constructor
@@ -111,7 +113,7 @@ void JParameterManager::PrintParameters(bool show_defaulted, bool show_advanced,

// If all params are set to default values, then print a one line summary and return
if (params_to_print.empty()) {
-        LOG << "All configuration parameters set to default values." << LOG_END;
+        LOG_INFO(m_logger) << "All configuration parameters set to default values." << LOG_END;
return;
}

@@ -157,7 +159,7 @@
}
std::ostringstream ss;
table.Render(ss);
-    LOG << "Configuration Parameters\n" << ss.str() << LOG_END;
+    LOG_INFO(m_logger) << "Configuration Parameters\n" << ss.str() << LOG_END;
}

/// @brief Access entire map of parameters
3 changes: 3 additions & 0 deletions src/libraries/JANA/Services/JParameterManager.h
@@ -66,6 +66,7 @@ class JParameter {
inline void SetIsAdvanced(bool isHidden) { m_is_advanced = isHidden; }
inline void SetIsUsed(bool isUsed) { m_is_used = isUsed; }
inline void SetIsDeprecated(bool isDeprecated) { m_is_deprecated = isDeprecated; }

};

class JParameterManager : public JService {
@@ -77,6 +78,8 @@ class JParameterManager : public JService {

~JParameterManager() override;

+    inline void SetLogger(JLogger logger) { m_logger = logger; }

bool Exists(std::string name);

JParameter* FindParameter(std::string);
8 changes: 4 additions & 4 deletions src/plugins/JTest/JTestDisentangler.h
@@ -27,10 +27,10 @@ class JTestDisentangler : public JFactoryT<JTestEventData> {
void Init() override {
auto app = GetApplication();
assert (app != nullptr);
-        app->GetParameter("jtest:disentangler_bytes", m_write_bytes);
-        app->GetParameter("jtest:disentangler_ms", m_cputime_ms);
-        app->GetParameter("jtest:disentangler_bytes_spread", m_write_spread);
-        app->GetParameter("jtest:disentangler_spread", m_cputime_spread);
+        app->SetDefaultParameter("jtest:disentangler_ms", m_cputime_ms, "Time spent during disentangling");
+        app->SetDefaultParameter("jtest:disentangler_spread", m_cputime_spread, "Spread of time spent during disentangling");
+        app->SetDefaultParameter("jtest:disentangler_bytes", m_write_bytes, "Bytes written during disentangling");
+        app->SetDefaultParameter("jtest:disentangler_bytes_spread", m_write_spread, "Spread of bytes written during disentangling");

// Retrieve calibration service from JApp
m_calibration_service = app->GetService<JTestCalibrationService>();
9 changes: 6 additions & 3 deletions src/plugins/JTest/JTestMain.cc
@@ -23,9 +23,12 @@ void InitPlugin(JApplication *app){
app->Add(new JFactoryGeneratorT<JTestDisentangler>());
app->Add(new JFactoryGeneratorT<JTestTracker>());

-    // Demonstrates attaching a CSV writer so we can view the results from any JFactory
-    app->SetParameterValue<std::string>("csv:dest_dir", ".");
-    app->Add(new JCsvWriter<JTestTrackData>());
+    bool write_csv = false;
+    app->SetDefaultParameter("jtest:write_csv", write_csv);
+    if (write_csv) {
+        // Demonstrates attaching a CSV writer so we can view the results from any JFactory
+        app->Add(new JCsvWriter<JTestTrackData>());
+    }

// Demonstrates sharing user-defined services with our components
app->ProvideService(std::make_shared<JTestCalibrationService>());