leaping

By leapingio

Leaping Alternatives

Similar projects and alternatives to leaping

NOTE: The number of mentions indicates how often a project appears in common posts plus user-suggested alternatives, so a higher count suggests a better leaping alternative or greater similarity.

leaping reviews and mentions

Posts with mentions or reviews of leaping. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-03-25.
  • FLaNK AI Weekly 25 March 2024
    30 projects | dev.to | 25 Mar 2024
  • Show HN: Leaping – Debug Python tests instantly with an LLM debugger
    3 projects | news.ycombinator.com | 22 Mar 2024
    Oof, I'm sorry to hear that - I don't think we had any Django projects in the set we were testing this on. I've just filed an issue and will hopefully fix it ASAP: https://github.com/leapingio/leaping/issues/2
  • Show HN: Leaping – Open-source debugging with LLMs
    1 project | news.ycombinator.com | 27 Feb 2024

    Hi HN! We’re Adrien and Kanav. We met at our previous job, where we spent about a third of our lives combating a constant firehose of bugs. In the hope of reducing this pain for others in the future, we’re working on automating debugging.

    We started by capturing information from running applications to then ‘replay’ relevant sessions later. Our approach for Python involved extensive monkey patching: we’d use OpenTelemetry-style instrumentation to hook into the request/response lifecycle, and capture anything non-deterministic (random, time, database/third-party API calls, etc.). We would then run your code again, mocking out the non-determinism with the captured values from production, which would let you fix production bugs with the local debugger experience. You might recognize this as a variant of omniscient debugging. We think it was a nifty idea, but we couldn’t get past the performance overhead/security concerns.
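    Roughly, the record/replay idea looked something like the following (a simplified sketch with placeholder names and globals, not our actual instrumentation):

    import time
    import random

    RECORDING = []       # values captured from a production session
    REPLAY_QUEUE = []    # captured values fed back in during replay
    MODE = "record"      # "record" in production, "replay" locally

    def _wrap(fn):
        def wrapper(*args, **kwargs):
            if MODE == "record":
                value = fn(*args, **kwargs)
                RECORDING.append(value)   # in practice, persisted alongside the trace
                return value
            return REPLAY_QUEUE.pop(0)    # during replay, return the captured value
        return wrapper

    # Patch a couple of non-deterministic sources; real instrumentation would also
    # hook database and third-party API calls through the request/response lifecycle.
    time.time = _wrap(time.time)
    random.random = _wrap(random.random)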

    Approaching the problem differently, we thought - could we not just grab a stack trace and sort of “figure it out” from there? Whether that’s possible in the general case is up for debate – but we think that eventually, yes. The argument goes as follows: developers can solve bugs not because they are particularly clever or experienced (though it helps), but rather because they are willing to spend enough time coming up with increasingly informed hypotheses (“was the variable set incorrectly inside of this function?”) that they can test out in tight feedback loops (“let me print out the variable before and after the function call”). We wondered: with the proper context and guidance, why couldn’t an LLM do the same?

    Over the last few weeks, we’ve been working on an approach that emulates the failing test approach to debugging, where you first reproduce the error in a failing test, then fix the source code, and finally run the test again to make sure it passes. Concretely, we take a stack trace, and start by simply re-running the function that failed. We then report the result back to the LLM, add relevant source code to the context window (with Tree-sitter and LSP), and prompt the AI for a code change that will get us closer to reproducing the bug. We apply those changes, re-run the script, and keep looping until we get the same bug as the original stack trace. Then the LLM formulates a root cause, generates a fix, we run the code again - and if the bug goes away, we call it a day. We’re also looking into letting the LLM interact with a pdb shell, as well as implementing RAG for better context fetching. One thing that excites us about generating a functioning test case with a step-by-step explanation for the fix is that results are somewhat grounded in reality, making hallucinations/confabulations less likely.
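    In pseudocode, the loop looks roughly like this (a simplified sketch with stubbed-out helpers, not the actual implementation):

    def run_failing_function() -> str:
        """Re-run the function that failed; return its output or stack trace."""
        return ""

    def gather_context(trace: str) -> str:
        """Pull relevant source into the prompt (we use Tree-sitter and LSP)."""
        return ""

    def llm(prompt: str) -> str:
        """Stand-in for an LLM call that returns a code change to apply."""
        return ""

    def apply_change(change: str) -> None:
        """Apply the proposed edit to the repro script or the source under test."""

    def debug(original_trace: str, max_iters: int = 10) -> bool:
        context = gather_context(original_trace)
        # Phase 1: keep editing the repro until we hit the same error as the trace.
        for _ in range(max_iters):
            result = run_failing_function()
            if original_trace in result:
                break
            apply_change(llm(f"Reproduce:\n{original_trace}\n{context}\n{result}"))
        # Phase 2: root-cause, generate a fix, and re-run to confirm the bug is gone.
        apply_change(llm(f"Root cause and fix:\n{original_trace}\n{context}"))
        return original_trace not in run_failing_function()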

    Here’s a 50 second demo of how this approach fares on a (perhaps contrived) error: https://www.loom.com/share/a54c981536a54d3c9c269d8356ea0d51?sid=aeafd2d1-9b86-43ad-83a6-b1062aa1bb50

    We’re working on releasing a self-hosted Python version in the next few weeks on our GitHub repo: https://github.com/leapingio/leaping (right now it’s just the demo source code). This is just the first step towards a larger goal, so we’d love to hear any and all feedback/questions, or feel free to shoot me an email at [email protected]!


Stats

Basic leaping repo stats
Mentions: 4
Stars: 247
Activity: 2.9
Last commit: about 1 month ago

leapingio/leaping is an open-source project licensed under the MIT License, which is an OSI-approved license.

The primary programming language of leaping is Python.
