bsv
-
Do You Know How Much Your Computer Can Do in a Second?
grep is the beginning, not the end. it’s a great performance baseline to meet, and then beat[1]. computers are insanely fast!
the startups using grep on aws are undercutting those doing slower things on aws. this must be why aws architects never talk about grep.
1. https://github.com/nathants/bsv
-
Generic dynamic array in 60 lines of C
awesome! i do the same thing for arrays[1] and maps[2].
stuff like this is great when you are trying to find the performance ceiling of some workload. literally nothing to hide.
1. https://github.com/nathants/bsv/blob/master/util/array.h
2. https://github.com/nathants/bsv/blob/master/util/map.h
-
Consider Using CSV
i had a lot of fun exploring the performance ceiling of csv and csv-like formats. turns out binary encoding of size-prefixed byte arrays is fast[1].
csv is just a sequence of 2d byte arrays. probably avoid it if dealing with heterogeneous external data. possibly use it if dealing with homogeneous internal data.
https://github.com/nathants/bsv
-
Big Data file formats
-
GitHub - SixArm/usv: USV: Unicode Separated Values
i like this idea, and do something similar: https://github.com/nathants/bsv
epanet-js
-
Ask HN: Did you change your software architecture due to monetary constraints?
At the startup I work at [0], we use an open source library I developed to run hydraulic models of water networks in JavaScript [1].
A hydraulic model may be between 1 and 10 MB, and the simulation results can end up being 100+ MB of time series data.
Other vendors with proprietary engines have to scale up servers to run their simulation engines, and they store and serve results from a database.
Running everything locally means we only have to store a static file and offload the simulation to the client.
Because we've architected it this way, our hosting costs are low and users generally have faster access to results (assuming they're running a moderately decent machine).
[0] https://qatium.com/
[1] https://github.com/modelcreate/epanet-js
-
Ask HN: How did you find your current job?
I'm a civil engineer, and I wrote an open source library that compiles a C library to JavaScript for my own personal projects - epanet-js [1].
A water utility in Spain spun off a startup called Qatium [2]; they used my library as the engine of their simulations and asked me to join.
[1] https://github.com/modelcreate/epanet-js
[2] https://qatium.com/
-
Ask HN: Which personal projects got you hired?
I created a handful of applications around water engineering/modelling [1], plus an open source library to run the simulations in JavaScript [2].
A water utility in Spain spun off a startup to create a similar web-based water modelling application, and they used my open source library.
They approached me, I joined them, and I've been able to maintain the open source library as part of my role.
[1] https://github.com/modelcreate/epanet-js#featured-apps
-
Ask HN: Have you created programs for only your personal use?
I work as a water engineer, specializing in building hydraulic models so water utilities can simulate their networks.
A big part of that is calibrating them, which can be time-consuming: you look through hundreds of options. I created a few web-based apps to help grind through these tasks, but ultimately they were for my own use as a consultant to close projects quickly.
I did pull out the engine as its own open source library for others to use, and that ended up helping me get my current role, where I can now maintain it and be paid at the same time.
https://github.com/modelcreate/epanet-js
-
[OC] Water flowing through a utility's water network
-
Ask HN: What is your current side-project?
https://github.com/modelcreate/epanet-js
I've built a few open source apps and a few other little projects to help automate my workflow.
There are only a handful of providers of modelling software, most are commercial and one recently sold to Autodesk for $1B.
Not sure I'll convince the industry to change but I'm enjoying tinkering around and making my own small difference.
What are some alternatives?
parquet-go - pure golang library for reading/writing parquet files
epanet2toolkit - An R package for calling the Epanet software for simulation of piping networks.
tad - A desktop application for viewing and analyzing tabular data
treebender - A HDPSG-inspired symbolic natural language parser written in Rust
ndjson.github.io - Info Website for NDJSON
zenbot-sim-runner - A sim run batch aggregator / automator for Zenbot. Eases the process of backtesting and subsequent analysis of results.
s4 - super simple storage service + data local compute + shuffle
place
notebook - private notebook with end-to-end encryption
okta-aws-cli-assume-role - Okta AWS CLI Assume Role Tool
tiny-snitch - an interactive firewall for inbound and outbound connections
m4b-tool - m4b-tool is a command line utility to merge, split and chapterize audiobook files such as mp3, ogg, flac, m4a or m4b