-
Before announcing this, I had already implemented the hardware for all the major building blocks, except for the MIDI component. I am currently using the Hz pitch detection system in various consulting applications. Note that, for practical reasons, I'm using the MCU's SAR ADCs, which are perfectly suitable for pitch detection, though not of audio-grade quality for use as an audio interface, as you mentioned. To be clear, this is not an audio interface project, nor should it be, for practical reasons. Including such functionality would, in my opinion, unnecessarily complicate the design and significantly increase the cost.
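For anyone curious why a plain SAR ADC is sufficient here, a time-domain approach along these lines is enough. This is a simplified sketch, not the actual implementation; `adc_read_sample()` and all the constants are hypothetical stand-ins for the MCU's real HAL and configuration:

```c
/* Minimal sketch: estimate fundamental frequency from SAR ADC samples
 * via time-domain autocorrelation. adc_read_sample() is a hypothetical
 * stand-in for whatever HAL call the target MCU provides. */
#include <stdint.h>
#include <stddef.h>

#define SAMPLE_RATE 16000.0f
#define N_SAMPLES   1024
#define MIN_LAG     20    /* ~800 Hz upper pitch bound */
#define MAX_LAG     400   /* ~40 Hz lower pitch bound */

extern uint16_t adc_read_sample(void);  /* hypothetical 12-bit SAR ADC read */

float detect_pitch_hz(void)
{
    static float buf[N_SAMPLES];

    /* Capture one analysis window and remove the DC offset
     * (SAR ADC inputs are usually biased around mid-scale). */
    float mean = 0.0f;
    for (size_t i = 0; i < N_SAMPLES; i++) {
        buf[i] = (float)adc_read_sample();
        mean += buf[i];
    }
    mean /= N_SAMPLES;
    for (size_t i = 0; i < N_SAMPLES; i++)
        buf[i] -= mean;

    /* Pick the lag with the highest autocorrelation. */
    size_t best_lag = 0;
    float  best_r   = 0.0f;
    for (size_t lag = MIN_LAG; lag <= MAX_LAG; lag++) {
        float r = 0.0f;
        for (size_t i = 0; i + lag < N_SAMPLES; i++)
            r += buf[i] * buf[i + lag];
        if (r > best_r) {
            best_r   = r;
            best_lag = lag;
        }
    }
    return best_lag ? SAMPLE_RATE / (float)best_lag : 0.0f;
}
```

A real detector needs more than this (peak interpolation, octave-error handling), but it shows why pitch detection does not demand an audio-grade converter.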
-
Yes, I agree with the practical reasons. In fact, the project as proposed will be the practical way to build MIDI controllers based on physical string vibration, as an alternative to keyboards. With one device like this, developing different kinds of MIDI controllers would be easy for everyone. That's why I'm fully on board.

But this discussion is a way to explore and explain the foundational hardware blocks. It's good to maintain a certain independence, but if we look at your Infinity project, we probably share the same initial blocks: the MCU's SAR ADCs and the same pitch detection software. In effect, we send analog signals to one device (Infinity) with a first ADC and MCU; in parallel we send the same analog signals to a second device (Hz to MIDI) with a second ADC and MCU; and in parallel again to an external device (the interface), this time with its own ADC and codec. Written out like this, it doesn't sound very logical, but it's true that this is still the best practical way to do it. I understand that, and it's probably simpler and cheaper in fact.

We know that an interface is composed of many different blocks (normally preamps, ADCs, DACs, a codec, audio outs, MIDI in/out, etc.). It's a GENERAL device. It looks like a practical device, but for me it's a very impractical one for ONE musical instrument. Of course, ideally, for instruments like the guitar, the target is to have only one ADC right after the pickups, and audio-grade quality is not a big issue these days. The problem comes after the ADC, because a protocol (codec) is needed to send the digital TDM data from the ADCs in a way that is comprehensible to everyone. The advantage is clear: the pitch detection software runs only once, and the metadata could be used anywhere at any time (Infinity, MIDI conversion, digital post-processing, etc.); something like the frame sketched below is what I imagine.

But don't take this too seriously, because I know it is not easily achievable and implies a complete paradigm change. For the moment, it's good to focus on the Hz to MIDI project in order to create a simple device that could easily be built into any instrument. It's almost there. I will create another discussion about the pending MIDI component.
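To make the "detect once, reuse the metadata everywhere" idea concrete, this is one possible per-channel frame layout. Every field name and size here is an assumption for illustration, not a defined format:

```c
/* Illustrative only: one possible per-channel frame carrying both raw
 * audio and the pitch metadata computed once at the source. */
#include <stdint.h>

typedef struct {
    uint8_t  channel;        /* which string/pickup this frame belongs to */
    uint32_t timestamp_us;   /* capture time of the first sample */
    float    pitch_hz;       /* pitch detector output, 0 = unvoiced */
    float    confidence;     /* 0.0 .. 1.0 detector confidence */
    uint16_t n_samples;      /* number of PCM samples that follow */
    int16_t  pcm[64];        /* raw audio block for downstream DSP */
} channel_frame_t;
```

Downstream consumers (Infinity, the MIDI converter, digital post-processing) would read `pitch_hz` and `confidence` without ever re-running the detector.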
-
On a related note: it would be very cool to see this functionality as an add-on to the Nexus. I've been shopping around quite a bit for a compact interface that can take 6-12 inputs. Most of what I've found is either too large, short on inputs of the same type (1/8" TRS preferred), too expensive, or loaded with features I won't use. The closest thing I could find was this, which looks promising, but it's not out yet.
-
I will start with some discussion. The big question I have here concerns the main hardware blocks. I think we have analog audio signals (of multiple kinds) that we want to convert to MIDI 2.0.
But MIDI is entirely a digital realm, and it could be considered a big block in itself. Conceptually, this block receives a digital audio channel of a given quality and, with the Hz pitch detection algorithm as its main software, generates MIDI data.
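The final mapping step of that block is small: in equal temperament it is just note = 69 + 12 · log2(f / 440 Hz). A minimal sketch in C (the function name is mine, for illustration):

```c
/* Standard equal-temperament mapping: note = 69 + 12 * log2(f / 440).
 * The fractional remainder can drive pitch bend (or MIDI 2.0 per-note
 * pitch) for notes that fall between semitones. */
#include <math.h>
#include <stdint.h>

uint8_t hz_to_midi_note(float hz, float *cents_off)
{
    if (hz <= 0.0f)
        return 0;                                  /* no pitch detected */

    float note_f  = 69.0f + 12.0f * log2f(hz / 440.0f);
    float rounded = roundf(note_f);

    if (cents_off)
        *cents_off = (note_f - rounded) * 100.0f;  /* deviation in cents */

    if (rounded < 0.0f)   rounded = 0.0f;          /* clamp to MIDI range */
    if (rounded > 127.0f) rounded = 127.0f;
    return (uint8_t)rounded;
}
```

For example, 440 Hz maps to note 69 (A4) with 0 cents of deviation.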
At the input, ADC circuitry is always needed in some form; that's the other big block. It could be reduced to a minimum, feeding its sample data directly to the algorithm, which then operates on it; in that case it is integrated into the other block as closed hardware/software.
But, in my opinion, it's possible to separate the two big parts cleanly and create a transmission system for the digital audio channels using some protocol (sketched below).
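By "some protocol" I mean nothing more exotic than framed serial data. A hypothetical example, where `uart_write()` stands in for whatever transport the hardware provides; none of this is a proposed standard:

```c
/* Hypothetical wire framing for sending digital audio channels between
 * the two blocks: a sync word, a length, the payload, and a simple
 * checksum. A sketch of the idea only. */
#include <stdint.h>
#include <stddef.h>

extern void uart_write(const uint8_t *buf, size_t len);  /* assumed HAL call */

void send_frame(const uint8_t *payload, uint16_t len)
{
    uint8_t hdr[4] = { 0xA5, 0x5A,                 /* sync word */
                       (uint8_t)(len & 0xFF),
                       (uint8_t)(len >> 8) };      /* little-endian length */
    uint8_t sum = 0;
    for (uint16_t i = 0; i < len; i++)
        sum += payload[i];                         /* 8-bit checksum */

    uart_write(hdr, sizeof hdr);
    uart_write(payload, len);
    uart_write(&sum, 1);
}
```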
The problem with this approach is that, in this form, the project becomes two independent projects, and that's the first discussion I want to launch. What's your opinion on this?
This is also related to the question of what needs to be embedded.
Thanks, and any opinion would be valuable.