[bisq-network/bisq] Reduce initial request size (#4233)

Florian Reimair notifications at github.com
Tue Sep 1 11:56:45 UTC 2020


A bit late, I know, but I am back online, my local infrastructure is fixed, and I am taking the time to comment on your discussions. I would still suggest a call sometime, though, just to get somewhere.

### ad rough idea
pruning data does not solve the problem.
- it lowers usability
- we would be fixing a piece of code that is at its limits by throwing even more code at it, thus increasing complexity instead of decreasing it
- it adds a lot of complexity down the road, as we would have to handle missing data throughout Bisq
- once Bisq gets more successful, we would need to prune data a lot more rigorously (one month? two weeks?)

### ad another way to do it
- we cannot use time (as you guys already realized)
- the only thing we control is releases: someone throws a number of messages into a bucket and puts a stamp on it, then we ship that bucket. There are no issues with later additions, and not even incomplete buckets are an issue, because the network keeps the databases synced anyway (see the sketch after this list).
- `The checkpoint terminology reveals the centralisation issue.` That is correct in a couple of ways:
  - `reveals`, because the issue is already there and cannot be discussed away
  - we have to deal with it eventually
  - by leaving it untouched, we do not move forward and cannot deal with "it"
  - we cannot solve each and every problem now
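
To make the bucket idea concrete, here is a minimal sketch (names like `BucketedStore`, `sealBucket` and `dataSince` are mine for illustration, not Bisq's actual API): each release seals the accumulated live data into an immutable bucket that ships with the binary, so a fresh node already owns every bucket up to its own release and its initial request only has to cover the delta.

```java
import java.util.*;

// Illustrative sketch of release-stamped buckets, not Bisq's real store API.
final class BucketedStore {
    // Immutable buckets keyed by a monotonically increasing release number.
    private final NavigableMap<Integer, Set<String>> sealedBuckets = new TreeMap<>();
    // Mutable live store holding everything not yet shipped in a bucket.
    private final Set<String> liveStore = new HashSet<>();

    void add(String payloadHash) {
        liveStore.add(payloadHash);
    }

    // At release time: freeze the accumulated live data under the new release number.
    void sealBucket(int releaseNumber) {
        sealedBuckets.put(releaseNumber, Set.copyOf(liveStore));
        liveStore.clear();
    }

    // Initial request: the peer announces the newest bucket it shipped with and
    // gets back only the later buckets plus the live store, not the full history.
    Set<String> dataSince(int peersNewestBucket) {
        Set<String> delta = new HashSet<>(liveStore);
        sealedBuckets.tailMap(peersNewestBucket, false).values().forEach(delta::addAll);
        return delta;
    }
}
```

Incomplete buckets are harmless in this scheme: whatever a bucket misses simply stays in the live delta, and the normal sync mechanics pick it up.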

### ad backwards compatibility
- seed nodes communicate using the old mechanics; nothing changed there, because of all the reasons you mentioned and more (a sketch of how the two paths can coexist follows below).
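
As a hedged sketch of that coexistence, building on the `BucketedStore` sketch above (again hypothetical names, not the real request handler): a peer that announces no bucket version simply gets the full data set through the unchanged legacy path.

```java
import java.util.*;

// Hypothetical handler showing old and new mechanics side by side.
final class RequestHandler {
    private final BucketedStore store;

    RequestHandler(BucketedStore store) {
        this.store = store;
    }

    Set<String> handleInitialDataRequest(OptionalInt peersNewestBucket) {
        if (peersNewestBucket.isEmpty()) {
            // Legacy client announced no bucket version: serve the full data
            // set, exactly as the old mechanics do today.
            return store.dataSince(Integer.MIN_VALUE);
        }
        // Bucket-aware client: only buckets newer than what it ships with,
        // plus the live store.
        return store.dataSince(peersNewestBucket.getAsInt());
    }
}
```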

### ad `if any of the DaoState data is lost`
- DaoState data isn't touched by this PR at all, and although that massive data store causes real issues, it has nothing to do with the 3/4 problem

### ad losing any data in general
- no data is deleted, no pruning, no shuffling around.
- the only thing that happens is that data is moved out of the live database once a new bucket arrives (through an app update), which reduces the disk I/O load (see the sketch after this list)
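
A minimal sketch of that move-on-update step (illustrative names, not the PR's exact code): entries covered by the freshly shipped read-only bucket are dropped from the mutable live store, i.e. the file that gets rewritten on every change, so writes shrink while every entry stays reachable.

```java
import java.util.*;

// Hypothetical illustration: nothing is deleted, data only moves from the
// frequently rewritten live store into an immutable, shipped bucket.
final class LiveStoreMigration {
    // Drop everything the immutable bucket already covers from the live store.
    static void moveToBucket(Set<String> liveStore, Set<String> shippedBucket) {
        liveStore.removeAll(shippedBucket);
    }

    // Reads consult the live store and all sealed buckets, so no data is lost.
    static boolean contains(List<Set<String>> sealedBuckets,
                            Set<String> liveStore,
                            String payloadHash) {
        return liveStore.contains(payloadHash)
                || sealedBuckets.stream().anyMatch(b -> b.contains(payloadHash));
    }
}
```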

### ad `Having discussed it and considering the risks it seems much safer to only focus on the tradestats to begin with.`
- focusing on tradestats is possible if you guys so desire
- however, I do not understand why there is this fear of losing data because of this PR, yet pruning data, i.e. intentionally losing data, seems to be a way forward?
- I do, however, fully understand, and have seen it happen, that we lose data once in a while because the current implementation cannot cope with the sheer amount of data anymore (e.g. it takes too long to write to disk and files get clipped, which by the way is addressed by this PR as well, at least for non-DAO stuff; see the sketch after this list)
- I fully understand that we deny access to Bisq to users who do not own a powerful dev machine
- I fully understand that we deny access to Bisq to users who do not have decent broadband internet access
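
On the clipped-files point: the usual fix is to write the snapshot to a temporary file and atomically rename it over the old one, so a slow or interrupted write can never leave a half-written database behind. A minimal sketch (hypothetical `AtomicStoreWriter`, not necessarily the PR's exact code):

```java
import java.io.IOException;
import java.nio.file.*;

// Illustrative atomic-persistence helper, not taken from the PR itself.
final class AtomicStoreWriter {
    static void write(Path storeFile, byte[] serializedStore) throws IOException {
        Path tmp = storeFile.resolveSibling(storeFile.getFileName() + ".tmp");
        // Write and sync the full snapshot to a sibling temp file first.
        Files.write(tmp, serializedStore,
                StandardOpenOption.CREATE, StandardOpenOption.TRUNCATE_EXISTING,
                StandardOpenOption.WRITE, StandardOpenOption.SYNC);
        // Replace the old file in a single atomic step; readers only ever
        // see the complete old file or the complete new one.
        Files.move(tmp, storeFile,
                StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.ATOMIC_MOVE);
    }
}
```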

### and finally
I am fully aware that there is a lot of technical debt in these parts of the code, and the debt needs to go. But we cannot eliminate it in one step (well, actually, we might be able to pull it off, but that would mean a complete rewrite of the p2p module, with all the risks that come with touching this part of the code). And it seems like we are discussing the same things over and over again, not coming to a consensus for months. Meanwhile, the actual request size has grown beyond 6 MB by now, and we are denying more and more users access to the Bisq network (and thereby losing them).

The actual bad thing here is that the code stayed as is: no technical debt added, none removed, nothing changed. Just a massive waste of time.

--
View it on GitHub:
https://github.com/bisq-network/bisq/pull/4233#issuecomment-684798196