Humanizer #627
Conversation
Hi! Concerning the missing feedback: there are not a lot of developers contributing to Hydrogen, and all of the "original" developers who wrote the code you want to discuss are no longer active. For my part, when I have time to work on Hydrogen, I'm mostly spending it on bugfixes and keeping things alive :-/

About the swing knob: have you checked whether this has an effect in older versions like 0.9.4 or 0.9.5? I do like the idea in general of giving the user more control over the randomization techniques, but I haven't checked your branch yet for the user interface part.

About the implementation of the Randomizer: the singleton pattern is quite over-used in Hydrogen :) I'm trying to avoid it for new classes where possible, since it is hard to tell which instances rely on each other. If the randomizer is only used in the Hydrogen class, there is nothing against making it a private member of that class and managing that object in the engine's constructor/destructor.
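(A minimal sketch of that last suggestion, assuming the randomizer really only needs to live as long as the engine; the class layout and member name are placeholders, not the code in this branch:)

```cpp
#include <memory>

// Stand-in for the Randomizer class introduced in this pull request.
class Randomizer {
public:
    double gaussian() { return 0.0; /* draw a random number here */ }
};

class Hydrogen {
public:
    // The engine owns its randomizer; no global get_instance() needed.
    Hydrogen() : m_pRandomizer( std::make_unique<Randomizer>() ) {}

private:
    std::unique_ptr<Randomizer> m_pRandomizer;
};
```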
Ah, I forgot to add one thing: it might help to put some screenshots and descriptions of the randomize functionality in the pull request, to discuss ideas with users and other developers. I don't have any idea what those noise colors are affecting and where the difference is 😄
Hey, yes, I'm aware that Hydrogen is maintained but not under active development. That's no problem, since I intend to do all the work on the humanizer myself instead of handing some unfinished code to others ;). But as someone who hasn't taken part in the development of the software yet, I don't feel entitled to change core parts of the code and ask for their adoption without discussing the changes first. In addition, this is the first code I have ever written in C++. As someone used to interpreted languages like R, Python, or Lua, implementing all these features in C++ is a little bit complicated. Therefore I'm glad to receive some guidance and have no problem changing the current implementation. E.g. the singleton pattern was heavily inspired by the Looper object, in order to make my code work at all.

I'm not quite sure what you mean by an effect in older versions, but reading and writing the new options specifying the noise color and the swing parameter do not cause problems. Earlier versions simply ignore the noise colors when loading a more recent .h2song file, and the swing parameter falls back to its default value since it is no longer present in the .h2song file. Importing an older .h2song with the updated humanizer works the same way.

The main thing I want to improve with my contribution are the correlations in the fluctuations generated by the humanizer. Sounds quite abstract and theoretical, I know :). Let me phrase it like this: using Gaussian white noise, as Hydrogen did beforehand, the audio engine draws a new random number for every single note to slightly alter both its velocity/pitch and its onset. But this implies that all the fluctuations are completely independent of each other. This would simulate a drummer who is not perfectly on time and immediately forgets about the note she played as soon as the stick hits its target: at one point she is slightly behind the beat on the bass drum, at the next beat she is in front of it, and so on, everything completely uncorrelated. This is most probably not how humans operate, and indeed some researchers already found during the 80s that such fluctuations behave like pink noise. With pink noise, if the drummer is behind the beat, there is a tendency that she will be behind the beat at the next beat as well. But just a tendency. It's not as simple as storing the fluctuation added during the last beat and adding a random number to it; the dependency is more complex and "fractal". The latter is what I implemented. Now the user can choose between white and pink colored noise via a dropdown menu. At least to my ears the humanizer indeed produces more human sounds.
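(As an illustration of the difference: the snippet below is a toy sketch, not the code in this branch. It contrasts independent white-noise offsets with a Voss-McCartney-style pink noise source, whose successive values stay correlated because only one of several internal white generators is refreshed per note.)

```cpp
#include <cstdio>
#include <random>

// Number of trailing zero bits; used to pick which row to refresh.
static int trailingZeros( unsigned int x ) {
    int n = 0;
    while ( ( x & 1u ) == 0u ) { x >>= 1; ++n; }
    return n;
}

int main() {
    std::mt19937 gen( 42 );
    std::normal_distribution<double> gauss( 0.0, 1.0 );

    const int nRows = 5;  // independent white sources summed into pink noise
    double rows[nRows];
    for ( double& r : rows ) r = gauss( gen );

    for ( unsigned int note = 0; note < 16; ++note ) {
        // White noise: a completely fresh, independent offset for every note.
        double white = gauss( gen );

        // Pink noise (Voss-McCartney style): refresh only one row per note, keep the rest.
        int row = trailingZeros( note + 1 ) % nRows;
        rows[row] = gauss( gen );
        double pink = 0.0;
        for ( double r : rows ) pink += r;
        pink /= nRows;

        std::printf( "note %2u  white offset % .3f  pink offset % .3f\n", note, white, pink );
    }
    return 0;
}
```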
That said, I'm not really done yet. At least on my side everything is working and it might be safe to merge it into master, but I will change a lot of stuff again, including the graphics of the mixer line. What I intend to do:
Overall, looks OK to me.
```cpp
// Composed generator with a multiplicative linear
// congruential generator as the outer method and a 64-bit
// Xorshift method as the inner one.
unsigned long long int white_uniform_int64();
```
You could use Doxygen for "documentation" comments.
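For instance, the comment above could be turned into a Doxygen block along these lines (just a sketch; adapt to whatever Doxygen style the project prefers):

```cpp
/**
 * Composed generator with a multiplicative linear congruential
 * generator as the outer method and a 64-bit xorshift method as
 * the inner one.
 *
 * \return The next pseudo-random 64-bit unsigned integer.
 */
unsigned long long int white_uniform_int64();
```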
```
@@ -76,7 +76,9 @@ Song::Song( const QString& name, const QString& author, float bpm, float volume
, __is_loop_enabled( false )
, __humanize_time_value( 0.0 )
, __humanize_velocity_value( 0.0 )
, __swing_factor( 0.0 )
// Colors: 0 - white noise; 1 - pink noise
```
You could use enum or, preferably, enum class for storing noise color.
Yes, I also like the idea of using an enum class to store the color of the noise instead of just an integer value. But the color needs to be saved to the .h2song file. Therefore I would vote against using such a class since this would require some additional overhead in converting it both in the MixerLine and the LocalFileMng/Song instance.
What do you think?
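(For what it's worth, a hypothetical sketch of what the conversion would look like, assuming the getters were changed to return the enum rather than an int; not the code in this branch:)

```cpp
// Hypothetical sketch, not the code in this branch.
enum class NoiseColor : int { White = 0, Pink = 1 };

// Writing to the .h2song file: cast the enum to the stored integer.
int storedColor = static_cast<int>( NoiseColor::Pink );

// Reading it back in LocalFileMng/Song: cast the integer back to the enum.
NoiseColor loadedColor = static_cast<NoiseColor>( storedColor );
```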
```cpp
uniformStateInt64( 4101842887655102017LL ) {
	__instance = this;
	// Generate a random seed
	unsigned long long int initialSeed = std::rand();
```
`std::rand()` returns `int`, which is narrower than `unsigned long long int`, so all high bits are initialized to 0.
BTW, you might consider using the pseudo-random number generation utilities in the C++ standard library.
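(Something along these lines, if the standard library route were taken; just a sketch, not a concrete proposal for this branch:)

```cpp
#include <random>

// Seed a 64-bit Mersenne Twister once, e.g. from std::random_device.
std::mt19937_64 engine( std::random_device{}() );

// Distributions for the two noise sources used by the humanizer.
std::normal_distribution<double> gaussianWhite( 0.0, 1.0 );
std::uniform_real_distribution<double> uniformWhite( 0.0, 1.0 );

double sampleOffset() {
    return gaussianWhite( engine );  // one Gaussian white noise sample
}
```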
Yes, indeed. But since it is just the seed for an RNG, which will be initialized once and (hopefully) never exhausts its period, I do not consider this a problem.
Considering the C++ library: I just finished reading the chapter on random numbers in the third edition of Numerical Recipes by Press et al., and the authors claim that a lot of research has happened in this field and that one should not trust the generators implemented in standard libraries, especially not std::rand(). And indeed, all the references in the pseudo-random number generation library predate the discovery of the now-standard statistical tests for checking the quality of RNGs. In addition, neither the best algorithm described by Press nor the combined or composed RNGs he suggests are present in this library. This was actually one of the main reasons I started looking at the humanizer.
Hi, I did a quick review. To me it looks good - I have suggested some cosmetic changes.
I'll try to incorporate all the suggested changes by the end of the week and also do some squashing of the commits I made along the way.
I just took a look at the humanizer and there are a couple of things I do not understand and which are probably not intended to work the way they do right now.

First of all, both the humanization of the velocity and of the pitch in *hydrogen.cpp* feature a negative bias of the random variable: first a randomly generated value and afterwards a constant value, determined by get_humanize_velocity_value() and get_random_pitch_factor(), is subtracted. Is there a reason for this bias? When randomizing the pitch, the random variable, which is drawn using the getGaussian() function, is only scaled by the fMaxPitchDeviation value but not by the random pitch factor of the corresponding instrument. The latter is only introduced as a negative bias. This does not seem right.

For better readability I renamed the input variable of the getGaussian() function to "variance", since this is what it actually is: the variance of the distribution the random variable is drawn from. Also, I would propose using the get_humanize... values as an input to the getGaussian() function, so it is obvious that the variance will be scaled using the knobs of the humanizer.

There are two variables, *fMaxPitchDeviation* and *nMaxTimeHumanize*, defined in the source and only used for the scaling. I expect these are intended to bound the random variable. Since we are generating Gaussian white noise, the random variable is (in theory) able to take any value between minus and plus infinity. I used the two variables instead to bound the random variable and assure plausible fluctuations. But this might have messed up the scaling applied here. I have only looked at the humanizer so far and lack overall knowledge of the internal workings of Hydrogen. Could someone take a look at the scaling here?

Last but not least I have a question: what exactly is the *swing* intended to accomplish? It is implemented as a constant and positive additive term to the offset of a note. No randomization whatsoever takes place here.

**correct scaling of the humanizer inputs**

Using integration by substitution on the second moment of a distribution one can verify that scaling a random variable by a factor `a` multiplies its standard deviation by `a` and its variance by `a^2`: `var(a*x) = a^2 * var(x)`. Therefore, the inputs of the humanizer knobs have to be squared in order to get the same behavior as beforehand. In other words, up to now the behavior was as follows: draw a Gaussian random number with a variance of 0.2 and scale the process by a factor controlled via the rotary buttons, which scales the variance of the process by the square of their value. I would vote to streamline this behavior by introducing the right range and scaling in the rotary button itself and displaying the variance of the white noise in the info region of Hydrogen after moving the button.

**verbose the maximal humanization variance**

The factors 0.2 and 0.3 in the calls to `getGaussian` in the humanization parts act as an upper bound for the variance of the Gaussian white noise. By assigning those values to appropriately named variables their role becomes much more apparent.
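(A standalone numerical check of that relation, not Hydrogen code; the knob value `a = 0.3` is just an example:)

```cpp
#include <cstdio>
#include <random>

int main() {
    std::mt19937 gen( 1234 );
    std::normal_distribution<double> gauss( 0.0, 1.0 );  // unit-variance white noise
    const double a = 0.3;                                // example humanize knob value
    const int N = 1000000;

    double sum = 0.0, sumSq = 0.0;
    for ( int i = 0; i < N; ++i ) {
        double x = a * gauss( gen );  // scale the draw by a ...
        sum += x;
        sumSq += x * x;
    }
    const double mean = sum / N;
    const double variance = sumSq / N - mean * mean;

    // ... and the sample variance comes out close to a^2, not a.
    std::printf( "sample variance = %f, a^2 = %f\n", variance, a * a );
    return 0;
}
```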
In small window sizes the phrase "Set humanize velocity parameter" needs some time to scroll through the notification area of the mixer. Thus, one is not able to see the actual value while still rotating the knob. Instead, I moved the displayed value to the front of the message so it is visible at all times.
The rotary knob for the **swing** was removed (because I still do not see any use for this variable and no one from the Hydrogen team responds to my questions). Instead, two `QComboBox`es have been added via `LCDCombo` objects. They are used as drop-down menus to select the color of the corresponding noise. Those colors can be individually chosen for both the humanization of the velocity and of the onset. The background picture of the master strip of the MixerLine was changed as well, to nicely integrate the changes.
The Hydrogen song now contains the additional functions `get_humanize_time_color` and `set_humanize_time_color` and their equivalents for the velocity. The internal state is stored in the `__humanize_time_color` and `__humanize_velocity_color` int variables. Both states are now written to and loaded from file properly. Every last remaining instance of the swing factor has been removed.

**first draft of the noise generating functions**

**fixing bugs in auxiliary impl of humanizer**

The random seed had the wrong type, causing the random number generator to spit out `nan`. The factor `maximalHumanizationTimeVariance` has been enlarged. Since the resulting random number will be floored, there is absolutely no point in having it set to 1.
Instead of a bunch of inline functions and dozens of global variables holding the states of the different generators of colored noise, I introduced a class `Randomizer`. It exposes interfaces to draw new random numbers as public methods and keeps all states as private members. To allow for better maintenance I moved all code concerning the random number generator into dedicated files `randomizer.cpp` and `randomizer.h`.

The randomizer is now designed to be a proper part of Hydrogen. Each time the `Hydrogen::create_instance()` function is called upon starting up Hydrogen, an instance of the Randomizer is created as well using `Randomizer::create_instance()`. During the runs of the audio engine the generated instance is retrieved using `Randomizer::get_instance()`. This way only one seeded random number generator exists, and every random number, regardless of its color, is drawn from it.

In addition, the generators of the Gaussian white and uniform white noise sources have been updated:

- the **Gaussian white noise source** now uses both Gaussian random variables produced by the Box-Muller transformation and relies on the updated uniform white algorithm
- the **uniform white noise source** was completely reworked and now uses the *Ranq1* algorithm from the book *Numerical Recipes* by Press et al. It is of higher quality and does not produce correlated noise like the `std::rand()` function

Some description has been added to the pink noise source.
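(For readers without the book at hand, the Ranq1 recurrence looks roughly like this; a sketch of the published algorithm, not necessarily identical to the code in `randomizer.cpp`:)

```cpp
#include <cstdio>

// Sketch of Ranq1 (Numerical Recipes, 3rd ed.): a 64-bit xorshift as the
// inner method whose output is passed through a multiplicative (modulo 2^64)
// linear congruential generator as the outer method.
struct Ranq1 {
    unsigned long long v;

    // The seed should differ from the magic initialization constant below.
    explicit Ranq1( unsigned long long seed ) : v( 4101842887655102017ULL ) {
        v ^= seed;
        v = int64();
    }

    unsigned long long int64() {
        v ^= v >> 21;
        v ^= v << 35;
        v ^= v >> 4;
        return v * 2685821657736338717ULL;
    }

    // Uniform double in (0, 1): 2^-64 times the 64-bit output.
    double doub() { return 5.42101086242752217e-20 * int64(); }
};

int main() {
    Ranq1 rng( 17ULL );
    for ( int i = 0; i < 4; ++i ) {
        std::printf( "%f\n", rng.doub() );
    }
    return 0;
}
```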
The Hydrogen core backend has now been updated to ask the GUI which noise color was chosen by the user. This affects the noise source the random increment is drawn from.
Hi all,
Hi @thijz, I perfectly agree. I'm a huge fan of big and comprehensive documentation. But this is just the first draft of my changes to the humanizer. E.g. in comparison to white noise, pink noise has some 'memory' of its past. But for now this memory is shared by all instruments, since there is a single state of the noise-generating object for all of them. Maybe this is how it works and a human keeps track of the rhythm as a whole. Or maybe our hands and feet are decoupled and it would be more appropriate to keep a separate state for each of them. Or some shared (toms) and some separate (snare, kick, hi-hat). Actually, since I'm not a drummer, I don't have an intuition for how it might work. In addition, all the papers I have studied so far only investigate a particular part of the drumkit (which would probably favor the separate approach). In any case, I'll try to get my hands on some drum recordings, look at the correlations myself, and fix things according to what I find. So, since there are still a number of such open questions, it would imply adjusting the documentation at every step. What's the general policy for documentation in your project? Do you incorporate all changes immediately into the manual and wiki, or do you wait until the next release?
I've added a "manual" tag so we can track this. BTW, it sounds like a great and fun improvement, keep up the good work @theGreatWhiteShark!
to more elegantly make the random number generating function of the AudioEngine available to other parts of the code as well. It looks quite empty but will likely be filled with life at some future point in time when I have time to look at hydrogen-music#627 again
Yet another try to talk about the humanizer within Hydrogen.
I reworked the humanizer, removed the `swing` knob and its counterparts in the core backend, and introduced another color for the noise. This is still work in progress, but the next thing I'm going to do will come with more invasive changes to the audio engine, which will most probably not be merged.
Changes:
- `Randomizer` class with `create_instance()` and `get_instance()` routines
- `MixerLine`