|  | Panako | dejavu |
| --- | --- | --- |
| Mentions | 2 | 15 |
| Stars | 175 | 6,325 |
| Growth | - | - |
| Activity | 4.0 | 0.0 |
| Last commit | 5 months ago | 27 days ago |
| Language | Java | Python |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Panako
- Show HN: Pyzam, Shazam for DJs and Mixtapes in Python
Hello, really glad to see a project like this popping up. I have a few questions, as I was working on something similar a few years ago:
1. I did some development myself for a "Track Discovery for DJs"[1] project in this space of "DJ music recognition", and I am wondering how you are able to handle mixtapes and DJ mixes when a significant amount of sound manipulation/distortion is applied, like pitch/tempo changes plus various effects. In my tests this totally confused algorithms that were not designed to handle such cases.
2. Can you share which algorithm you have implemented for this project? I read most of the research papers in this space, and my preferred solution was to build upon https://github.com/JorenSix/Panako, which I did.
In genres like minimal microhouse/techno, where tracks often share similar rhythm patterns or are even built from the same sample packs, it proved more difficult to get reliable results.
I was investigating how Spotify and other market leaders do track recognition, and they train ML models on copies of the same track with 100+ different effects applied...
Curious to hear your thoughts...
[1] - https://rominimal.club
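The pitch/tempo problem in point 1 can be shown with a toy sketch: exact Shazam-style peak-pair hashes break as soon as time offsets are scaled, while hashes built on time *ratios*, in the spirit of Panako's approach, survive a uniform tempo change. This is an illustrative sketch, not Panako's actual code; all names and the peak data are made up:

```python
# Toy demo: why exact (f1, f2, dt) hashes fail under tempo changes.

def hash_pairs(peaks):
    """peaks: list of (time, freq). Hash each adjacent pair exactly."""
    return {(f1, f2, t2 - t1) for (t1, f1), (t2, f2) in zip(peaks, peaks[1:])}

def tempo_shift(peaks, factor):
    # Speeding a track up scales every time coordinate uniformly.
    return [(t * factor, f) for t, f in peaks]

original = [(0.0, 440), (0.1, 660), (0.25, 880), (0.4, 550)]
db = hash_pairs(original)
query = hash_pairs(tempo_shift(original, 0.9))   # 10% faster

print(len(db & query))   # 0 -- no exact hash survives the tempo change

# Ratio trick: the ratio of time offsets inside a peak triple is
# invariant under uniform tempo scaling, so quantized ratios still match.
def ratio_hashes(peaks, q=100):
    out = set()
    for (t1, f1), (t2, f2), (t3, f3) in zip(peaks, peaks[1:], peaks[2:]):
        r = (t2 - t1) / (t3 - t1)            # tempo-invariant ratio
        out.add((f1, f2, f3, round(r * q)))  # quantize for robustness
    return out

print(len(ratio_hashes(original) & ratio_hashes(tempo_shift(original, 0.9))))
# 2 -- both triples still match after the tempo change
```

Pitch shifting additionally moves the frequencies, so the frequency components would need the same ratio treatment; this sketch only covers the tempo axis.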
- Identification of all usages of OSTs in Made in Abyss (S1)
Using neural networks seems complicated; did you try audio fingerprinting? I have been using this audio fingerprinting library to power this anime song synchronization script. You can check out Panako and dejavu too.
dejavu
- Audio Fingerprinting and Recognition in Python
- Contacting Collectors or Creating API to help with searching
This doesn't seem hard. You can use something like this to download the songs: https://stackoverflow.com/a/27481870/6151784, and something like this to calculate how closely they match: https://github.com/worldveil/dejavu. The question is whether you would set up a dedicated server to do the work, or run it on your own PC. You could also create a very simple page where someone pastes a YouTube profile URL and you check all the songs under that profile. You would also want a database to record the matching results and which YouTube profiles have already been checked. Something like that could work.
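The bookkeeping part of that idea (remembering which profiles were checked and what each video matched) can be sketched with a small SQLite schema. Table and column names here are invented for illustration; the match values would come from whatever recognizer you plug in:

```python
# Minimal sketch of a "profiles checked / matches found" database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE profiles (
    url        TEXT PRIMARY KEY,
    checked_at TEXT
);
CREATE TABLE matches (
    profile_url  TEXT REFERENCES profiles(url),
    video_id     TEXT,
    matched_song TEXT,   -- best fingerprint match, if any
    confidence   REAL,   -- recognizer's match confidence
    PRIMARY KEY (profile_url, video_id)
);
""")

def already_checked(url):
    """Skip profiles we have processed before."""
    return conn.execute("SELECT 1 FROM profiles WHERE url = ?",
                        (url,)).fetchone() is not None

def record(url, video_id, song, confidence):
    """Store one recognition result for one video of a profile."""
    conn.execute("INSERT OR IGNORE INTO profiles VALUES (?, datetime('now'))",
                 (url,))
    conn.execute("INSERT OR REPLACE INTO matches VALUES (?, ?, ?, ?)",
                 (url, video_id, song, confidence))

record("https://youtube.com/@someprofile", "abc123", "Artist - Title", 0.87)
print(already_checked("https://youtube.com/@someprofile"))  # True
print(already_checked("https://youtube.com/@other"))        # False
```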
- Tiny bit of experience but need to compile a Github program. What is the best video / resource to learn to do this quickly?
If you read the installation.md file, it clearly states that it has only been tested on UNIX systems, so you might be on your own trying to get it to work on Windows.
- Help needed with school project
- Dejavu – Audio fingerprinting and recognition algorithm
- fingerprinting sections of audio from file
I want to say: these few seconds match these few seconds from a different audio track. Using dejavu as-is has overhead I do not need or want, hence I've been fiddling around with the fingerprint script. When modifying the global variables I can get better or worse hits, but I will admit that, even after reading their recommended article and many other sources, I can't find a good explanation of the mathematics behind the filtering applied after the spectrogram is computed. As far as I am aware, we first apply filters to find fine points (peaks) across the spectrogram, and after that we only check the distance between points along the time axis, not along the frequency axis or the hypotenuse (which seems odd).
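The peak-picking and pairing steps described above can be approximated roughly like this. This is an illustrative sketch, not dejavu's actual code; the parameter names (`amp_min`, `fan_out`, `max_dt`) and the random stand-in spectrogram are my own:

```python
# Sketch of the post-spectrogram steps: local-maximum peak picking,
# then pairing each peak with a few later peaks into (f1, f2, dt) hashes.
import numpy as np

def find_peaks(spec, amp_min=5.0):
    """Return (t, f) indices of cells that beat their 8 neighbours."""
    peaks = []
    T, F = spec.shape
    for t in range(1, T - 1):
        for f in range(1, F - 1):
            v = spec[t, f]
            if v > amp_min and v == spec[t-1:t+2, f-1:f+2].max():
                peaks.append((t, f))
    return peaks

def pair_peaks(peaks, fan_out=3, max_dt=20):
    """Pair each peak with up to fan_out later peaks. The hash keeps
    (f1, f2, dt): dt is measured along the time axis only, which is
    why no frequency distance or hypotenuse enters the comparison."""
    hashes = []
    for i, (t1, f1) in enumerate(peaks):
        for (t2, f2) in peaks[i + 1:i + 1 + fan_out]:
            if 0 < t2 - t1 <= max_dt:
                hashes.append(((f1, f2, t2 - t1), t1))
    return hashes

rng = np.random.default_rng(0)
spec = rng.random((64, 32)) * 10     # stand-in for a real spectrogram
peaks = find_peaks(spec)
hashes = pair_peaks(peaks)
print(len(peaks), len(hashes))
```

Storing exact frequencies with only a time offset keeps the hash discriminative and cheap to index; that design choice is exactly why this family of fingerprints is fragile under the pitch/tempo manipulation discussed elsewhere on this page.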
- Some information and advice about DDoS, from someone who was there during #opPayback
- List of resources
- Uploading an audio dataset into a database for comparison
I used a repo called https://github.com/worldveil/dejavu to compare hashed audio fingerprints and distinguish the differences between them.
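Comparing two fingerprinted files ultimately comes down to comparing two sets of hashes. A minimal illustration of that (the hashes themselves would come from dejavu; this helper function is hypothetical):

```python
# Overlap between two fingerprint hash sets as a similarity score.
def similarity(hashes_a, hashes_b):
    a, b = set(hashes_a), set(hashes_b)
    if not (a or b):
        return 0.0
    return len(a & b) / len(a | b)   # Jaccard index

print(similarity({1, 2, 3, 4}, {3, 4, 5}))  # 0.4
```

In practice dejavu also checks that the matching hashes agree on a common time offset, which is much more robust than raw set overlap, but the set view is a useful first approximation.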