Description

We propose the development of a dashboard to visualize the impact of changes in the Amplification Factor on Balancer Stable-Like Pools. This dashboard should provide insights into how changes in the Amplification Factor can affect: 1. pool composition, i.e. the proportion of tokens; 2. relative prices between pairs; and 3. liquidity/market depth. Hopefully, it will help Balancer Maxis and Pool Creators better understand the impacts of changes they make to the Amplification Factor, ultimately leading to better outcomes for all users of the platform - better prices for traders should attract more volume and thus generate more fees for LPs.

Luiza and Pedro (bleu's devs) worked on a similar tool last year as an exercise for learning more about Balancer. We weren't able to review the dashboard rigorously back then, and we built it using Streamlit (a Python package). ZenDragon also developed Excel sheets to simulate Stable Pool params and assist protocols looking to create/deploy their own pools.

We're requesting this Grant to integrate the features from both the dashboard bleu built with Streamlit and the sheets ZenDragon developed with Excel into tools.balancer.bleu - reviewing the study done back then and developing a web interface that makes it easier to extract value from that study.

Value-Added

Educate users on Stable Math. The Amp Factor Dashboard will provide valuable insights to Balancer Maxis and Pool Creators about the impacts of changes to the Amplification Factor. Overall, the dashboard can increase transparency and awareness of how changes in this parameter impact the performance of this type of pool.

Deliverables

Milestones

  1. Dashboard design & StableMath research (2 weeks)
    1. StableMath research (~1/2 week)
      1. This indeed has changed a bit with ZD providing the Sheets - but just to recap, bleu also worked on a StableMath simulator. Even though we have ZD's sheets and bleu's Python app, we still have to code a lot of the math in TypeScript now. Having worked on it before, and having ZD's sheets to compare against, reduces the lift, but porting it all to TypeScript is still new work.
      2. Deliverable: docs containing a prioritized set of charts the dashboard should support.
    2. Dashboard design (~1 1/2 weeks)
      1. We've been building a kind of design system while working on Pool Metadata and Internal Manager, but this dashboard requires a few other components we still have to create, e.g. inputs, sliders, and mainly charts. We're happy to share the Figma with the Grants committee and gather your feedback before moving to front-end implementation.
      2. Deliverable: Figma file with designs for the dashboard itself and also reusable components.
  2. Front-end Development (4 weeks)
    1. Front-end implementation (1 1/2 weeks)
      1. Basically replicating the Figma in the Next.js app and creating the new components mentioned before, e.g. charts and sliders.
      2. Deliverable: the Next.js app with the basic front-end implementation; it should follow the Figma designs, but the charts don't need to be working yet.
    2. Subgraph integration (1/2 week)
      1. Integrate with Balancer's Subgraph to import pool params - needed for the feature of simulating amp factor changes on existing pools.
      2. Deliverable: users should be able to paste a poolId and import its params (tokens, balances, amp, etc.) - again, charts don't need to be working yet.
    3. StableMath implementation (2 weeks)
      1. Here we want to both code the required StableMath functions in TypeScript and integrate them with the inputs/sliders and the multiple charts - ZD's Excel presents a chart for impacts on price, but he really liked the 2 others we built in Python, which show the impacts on the pool's composition and liquidity depth.
      2. To ensure everyone is on the same page, we're proposing charts for users to play with the amp factor and analyze: 1. impact on token prices; 2. effects on the pool's composition, i.e. the proportion of tokens; 3. liquidity/market depth.
      3. Deliverable: proper implementation of the charts; everything should be usable, and users should be able to run the simulations behind the charts prioritized in the 1st milestone.
  3. Review & Deployment (1 week)
    1. Review & Deployment (1 week)
      1. Deployment isn't a problem as we've already automated it on other projects. The main objective here is to do intensive reviewing and testing on the app and compare results against bleu's Python app and ZD's sheets to ensure correctness. Of course, review isn't left to the very end - we're always reviewing during front-end development - but this leaves us a bit of a buffer for a proper final pass.
      2. Deliverable: app hosted at tools.balancer.bleu, a demo video on how to use it, and final review from bleu's team.
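To illustrate the Subgraph integration described in milestone 2, here's a minimal sketch of how pool params could be imported from a poolId. This is an assumption-laden sketch: the query shape and field names (`amp`, `tokens`, `balance`) are based on our reading of the public Balancer v2 subgraph schema and should be verified against the deployed schema, and `fetchPoolParams` is an illustrative name, not an existing API.

```typescript
// Sketch only - field names are assumed from the Balancer v2 subgraph schema
// and must be checked against the deployed schema before relying on them.
const POOL_PARAMS_QUERY = `
  query PoolParams($poolId: ID!) {
    pool(id: $poolId) {
      amp
      tokens {
        symbol
        balance
      }
    }
  }
`;

interface PoolParams {
  amp: string;
  tokens: { symbol: string; balance: string }[];
}

// Hypothetical helper: POST the query to a subgraph endpoint and unwrap the pool.
async function fetchPoolParams(
  subgraphUrl: string,
  poolId: string
): Promise<PoolParams> {
  const res = await fetch(subgraphUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: POOL_PARAMS_QUERY, variables: { poolId } }),
  });
  const { data } = await res.json();
  return data.pool;
}
```

The returned balances and amp would then seed the simulator's inputs, with the sliders overriding them locally.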
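As a sketch of the kind of StableMath we'd be porting to TypeScript in milestone 2.3: the StableSwap invariant D and the "other balance" solve, both via Newton iteration. This is our own simplification for a 2-token pool using plain floating-point numbers - function names are illustrative, and Balancer's actual on-chain math uses fixed-point arithmetic. It shows the behavior the price chart would visualize: a higher amp pulls the relative price toward 1 even when the pool is imbalanced.

```typescript
// Simplified StableSwap math (plain numbers, not Balancer's fixed-point).
// Invariant D solved by Newton iteration on Curve's StableSwap equation.
function calculateInvariant(amp: number, balances: number[]): number {
  const n = balances.length;
  const sum = balances.reduce((a, b) => a + b, 0);
  if (sum === 0) return 0;
  const ann = amp * n ** n;
  let d = sum; // initial guess: sum of balances
  for (let i = 0; i < 255; i++) {
    let dP = d;
    for (const x of balances) dP = (dP * d) / (n * x);
    const dPrev = d;
    d = ((ann * sum + dP * n) * d) / ((ann - 1) * d + (n + 1) * dP);
    if (Math.abs(d - dPrev) < 1e-10) break;
  }
  return d;
}

// Given one balance x and the invariant d, solve for the other balance (n = 2).
function getOtherBalance(amp: number, x: number, d: number): number {
  const n = 2;
  const ann = amp * n ** n;
  const c = (d * d * d) / (n * n * x * ann);
  const b = x + d / ann;
  let y = d;
  for (let i = 0; i < 255; i++) {
    const yPrev = y;
    y = (y * y + c) / (2 * y + b - d);
    if (Math.abs(y - yPrev) < 1e-12) break;
  }
  return y;
}

// Spot price of the other token in units of x, via a small finite difference
// along the invariant curve (price = -dy/dx).
function spotPrice(amp: number, x: number, d: number): number {
  const eps = 1e-6;
  return (getOtherBalance(amp, x, d) - getOtherBalance(amp, x + eps, d)) / eps;
}
```

For a balanced 100/100 pool, D is exactly 200 and the spot price is 1 at any amp; if the pool is pushed to 150 of one token, the price stays much closer to 1 at amp 500 than at amp 1 - exactly the effect the dashboard's charts are meant to make visible.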

Grant Size

$10k paid in BAL

To be completely transparent, we've been asking around $1500 per week of work - you folks can confirm this on other Grants we applied for; it's basically 4 weeks = $6k and 6 weeks = $9k. We estimated around 7 weeks for this one, so $10.5k, which we rounded down to $10k.

When applying for that batch of Grants, we reduced the value of this project (and also the Pool Verifier) a bit because we thought we were adding too much buffer due to scope uncertainty. This time, though, after breaking down the milestones (as presented above), we found that there's a lot of work to be done.

Moreover, we didn't know before that this Grant could expand into a Pools Simulator, which requires a bit of generalization so components can be reused in the future.

Closing Remarks

During the BCN onsite I (Fábio) met Xeonus and ZenDragon, and we decided bleu would work on this Grant - Xeonus has a lot on his plate with the Aura Analytics app and ZenDragon will focus on Governance projects. We also agreed that it would be beneficial to refactor the Weighted Math simulator (the IL and PI calculators on balancer.tools) and have it in the same app as the Stables - we want to deliver the StableSwap simulator first, but will take this into consideration when building the app so that supporting the Weighteds can be simple in the future 😄