• Re: Is binkp/d's security model kaputt?

    From tenser@21:1/101 to Oli on Fri Sep 3 08:16:24 2021
    On 02 Sep 2021 at 09:42a, Oli pondered and said...

    IMHO binkd/binkp offers lots of pseudo-security along with several security and usability pitfalls. Are there any good workarounds, or do we need a binkp/2.0?

    What you really need is an HTTPS-based transport.

    Part of the issue is that there is the protocol itself (which is
    superfluous) and the implementations, which vary widely in quality and
    robustness. All of the points you raise about the protocol are obviated
    by using HTTPS instead: mutual authentication, secure transport, etc;
    you also get compression, parallelization, transfer resumption,
    checksumming, etc.

    I propose an exchange where a client connects to a server and GETs a
    list of articles offered by the server from a generic "offer" resource.
    The offer list includes a resource identifying each article. The client
    then retrieves whatever it's interested in via a normal HTTP GET against
    the corresponding resource.

    The client indicates the disposition of everything it was offered by
    posting to a generic "ack" resource; the body of the POST is basically
    the same list, but with a disposition for each article: "ack" (received),
    "nack" (neither received nor wanted) or "defer" (will try again later).
    The default for an article missing from the list is "defer". Note that
    simply completing a successful GET is not the same as an "ack"; the
    requesting side may have difficulty saving the article, etc., so the
    offering side must wait for an "ack" before removing the article from
    the list bound for the requester.
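
    A minimal sketch of this pull side, assuming the Python "requests"
    library; the /offer and /ack paths, the JSON shape, and the
    save_article() helper are illustrative assumptions, not part of any
    existing standard:

      import requests

      BASE = "https://hub.example.net/ftn"      # hypothetical peer URL
      auth = ("21:1/101", "secret")             # e.g. HTTP basic auth over TLS

      # Fetch the offer list; assume it looks like
      # [{"id": "abc123", "href": "/ftn/articles/abc123"}, ...]
      offers = requests.get(f"{BASE}/offer", auth=auth).json()

      dispositions = []
      for offer in offers:
          try:
              r = requests.get(f"{BASE}{offer['href']}", auth=auth)
              r.raise_for_status()
              save_article(offer["id"], r.content)   # hypothetical local helper
              dispositions.append({"id": offer["id"], "disposition": "ack"})
          except Exception:
              # Couldn't fetch or store it; retry on a later poll.
              dispositions.append({"id": offer["id"], "disposition": "defer"})

      # Report the disposition of everything offered; anything omitted
      # is treated as "defer" by the offering side.
      requests.post(f"{BASE}/ack", json=dispositions, auth=auth).raise_for_status()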

    The inverse of this would also be done: an HTTP POST to the offer
    resource. The server's response to the POST would contain a body (this
    is unusual, but allowed by the protocol) listing the articles to send
    and the resources to send them to; the client would then execute an
    HTTP PUT request for each article indicated by the server. Finally, it
    would issue a GET against the "ack" resource to retrieve the disposition
    of each article it offered to the server, as above.
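
    The push side, continuing the same assumptions (load_outbound_articles()
    and remove_from_outbound() are hypothetical local helpers):

      import requests

      BASE = "https://hub.example.net/ftn"
      auth = ("21:1/101", "secret")

      outbound = load_outbound_articles()       # [{"id": ..., "data": bytes}, ...]
      offer = [{"id": a["id"]} for a in outbound]

      # The response body to the POST lists which articles the server wants
      # and where to PUT them, e.g. [{"id": "abc123", "href": "/ftn/inbound/abc123"}].
      wanted = requests.post(f"{BASE}/offer", json=offer, auth=auth).json()

      by_id = {a["id"]: a for a in outbound}
      for item in wanted:
          requests.put(f"{BASE}{item['href']}",
                       data=by_id[item["id"]]["data"],
                       auth=auth).raise_for_status()

      # Only articles the server acked can safely leave our outbound queue.
      for d in requests.get(f"{BASE}/ack", auth=auth).json():
          if d["disposition"] == "ack":
              remove_from_outbound(d["id"])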

    A text-based serialization for article data using something like JSON
    would also be nice.

    --- Mystic BBS v1.12 A46 2020/08/26 (Linux/64)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From tenser@21:1/101 to Oli on Sat Sep 4 02:24:16 2021
    On 03 Sep 2021 at 11:16a, Oli pondered and said...

    The Fidonet standards are a convoluted mess.

    Not only that, they're not as efficient as people think. There's
    a lot of wasted space in .PKT (space for fields that are never filled
    in), and the need to record every node that's seen a message doesn't
    seem scalable. USENET solved this by recording a routing path as
    articles transited the network; that meant one could cheaply
    detect loops when communicating with peers.
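
    For illustration, USENET-style loop avoidance in a few lines of Python
    (the '!'-separated path and the node names here are made up):

      def should_offer(article_path: str, peer: str) -> bool:
          """article_path is a '!'-separated relay list, e.g. 'hubA!hubB!origin'."""
          return peer not in article_path.split("!")

      def relay(article_path: str, my_name: str) -> str:
          """Prepend ourselves before passing the article on, as news servers do with Path:."""
          return f"{my_name}!{article_path}"

      assert should_offer("hubA!hubB!origin", "hubC")
      assert not should_offer("hubA!hubB!origin", "hubB")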

    We have the message as the central
    building block. I wouldn't touch the message format, because that would break compatibility and would lead to a different network.

    I thought about these problems a bit when I wrote ginko, and became
    convinced that the real solution is to serve legacy systems at the
    edge. For backbones and hubs, use new formats with a standard
    canonicalization and checksumming for duplicate detection and article
    identification, and only translate to/from legacy formats when
    communicating with legacy software.
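
    A sketch of what canonicalization plus checksumming might look like; the
    normalization rules here (CRLF to LF, strip trailing whitespace, UTF-8,
    hash over sender/subject/body) are assumptions for illustration only:

      import hashlib

      def article_id(sender: str, subject: str, body: str) -> str:
          canonical = "\n".join(
              line.rstrip() for line in body.replace("\r\n", "\n").split("\n")
          )
          material = f"{sender}\x00{subject}\x00{canonical}".encode("utf-8")
          return hashlib.sha256(material).hexdigest()

      # Two hubs computing the same digest for the same message can drop the
      # duplicate without ever comparing SEEN-BY lines.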

    Everything else can easily be changed. We can use another transmission
    protocol; just create a nodelist flag (or use DNS SRV records). We don't
    have to use PKT files (they're not even a standard) for transmission. We
    can get rid of the weird and limited BSO. Tossing / routing could be
    handled differently ...

    Honestly? The whole hunk of poo ought to be tossed and re-architected.
    Using the things we've improved on in the last 40 years will actually
    simplify the whole mess, making it easier to move to IoT devices and
    so on.
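
    One small example of leaning on standard infrastructure, per the
    SRV-record idea quoted above: discover a node's mailer endpoint via DNS
    instead of a nodelist flag. This assumes the dnspython library and a
    purely hypothetical "_ftnhttp._tcp" service name:

      import dns.resolver

      def lookup_endpoint(domain: str):
          answers = dns.resolver.resolve(f"_ftnhttp._tcp.{domain}", "SRV")
          # Prefer the record with the lowest priority, then the highest weight.
          rr = min(answers, key=lambda r: (r.priority, -r.weight))
          return str(rr.target).rstrip("."), rr.port

      # host, port = lookup_endpoint("example.org")   # hypothetical zone data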

    --- Mystic BBS v1.12 A46 2020/08/26 (Linux/64)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From tenser@21:1/101 to acn on Sat Sep 4 02:25:28 2021
    On 03 Sep 2021 at 12:16p, acn pondered and said...

    I think the key feature of BinkP is the compatibility with existing programs.

    This is easily handled by an adapter layer at the edge.

    --- Mystic BBS v1.12 A46 2020/08/26 (Linux/64)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From tenser@21:1/101 to Oli on Sat Sep 4 02:28:35 2021
    On 03 Sep 2021 at 11:55a, Oli pondered and said...

    But it would be just another step to the complete HTTPification of the internet. Maybe just switch to the Internet Mail format ...

    You say that as if it's a bad thing, though I wouldn't use
    the RFC822-descended mail formats. But Fidonet (and all
    extant FTN networks) have depended on the Internet for decades;
    they've just used hobbyist protocols designed by amateurs that
    are fragile and poorly conceived. By using something standard,
    they could actually take advantage of existing infrastructure to
    be more robust, performant and secure... probably simpler, too.

    --- Mystic BBS v1.12 A46 2020/08/26 (Linux/64)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From tenser@21:1/101 to Oli on Sat Sep 4 06:04:40 2021
    On 03 Sep 2021 at 05:41p, Oli pondered and said...

    What or where is the edge in a p2p network? And why is there always a tendency to centralize the shit out of FTNs?

    I'd define the edge as anywhere that legacy systems contact the network.
    As for centralizing...that's part of the design of FTN; it's very
    hierarchical and there's non-trivial overhead to setting up a hub.

    I think with an HTTP-based protocol, it becomes much easier to actually
    make it a mesh. Any node that's running the HTTP service can, in
    principle, peer with any other node ... much like USENET.

    --- Mystic BBS v1.12 A46 2020/08/26 (Linux/64)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From Avon@21:1/101 to apam on Wed Sep 22 16:36:12 2021
    On 22 Sep 2021 at 02:07p, apam pondered and said...

    Plus, I don't really know the best way to do the transmission. I was thinking of just using POST with a node number and password and having the server respond with JSON messages in the response.

    I've been reading about another dev who is interested in using NNCP as a tool for getting messages from A to B.

    He writes:

    [snip]

    If you haven't heard of it, NNCP [1] is to UUCP approximately what ssh is to
    rsh/telnet. NNCP is an asynchronous, delay-tolerant tool for fire-and-forget,
    secure, reliable transmission of files, file requests, Internet mail (and now
    news), and commands. All packets are integrity-checked, end-to-end encrypted,
    and explicitly authenticated by known participants' public keys. Onion
    encryption is applied to relayed packets. Each node acts as both client and
    server, and can use either a push or a poll model. NNCP can operate over a
    lot of transports: the Internet, USB sticks, tapes, CD-ROMs, ssh, Dropbox, etc.

    So basically it's UUCP for the modern world. I've used NNCP for everything from automated git repo synchronization to hundreds-of-GB ZFS backup streams.

    And I now intend to offer Usenet feeds to interested people who would like to
    receive them over NNCP. The setup is easier than with UUCP, the environment
    is more secure, and the approach is so similar that it needs only a tiny bit
    of glue to drop into INN in place of UUCP.

    [snip]

    Not sure if this thinking or approach is of any use for what you are
    thinking of tinkering with, but I can approach said gentleman and invite him
    to fsxNet and/or pass on some contact details to you via email if you felt
    there was any interest. I suspect he may be interested in what you both are
    kicking around too. :)

    --- Mystic BBS v1.12 A46 2020/08/26 (Linux/64)
    * Origin: Agency BBS | Dunedin, New Zealand | agency.bbs.nz (21:1/101)
  • From Oli@21:3/102 to apam on Wed Sep 22 10:41:25 2021
    apam wrote (2021-09-22):

    do you think JSON is the way to go for stored / transmitted
    messages?

    IMHO:

    1.) JSON sucks.
    (No idea why I pitched the idea to use JSON instead of XML to the CouchDB guys)

    2.) It's especially stupid as a format for encoding mails. It sucks for 8-bit text that is not UTF-8. You have to escape everything and/or do charset translation.

    3.) You cannot transport binary data without first encoding it as base64 or something similar (the sketch at the end of this message illustrates both problems).

    Of course you can encode the header fields in JSON and get the message body as a binary blob via another HTTP request. Maybe just use a variation of JMAP (https://jmap.io/spec-mail.html)?


    I still don't see the appeal of text-based protocols and interchange formats.

    4.) JSON sucks for config files too.
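
    To make points 2 and 3 concrete, a minimal Python sketch (field names are
    made up): a CP437 message body is not valid UTF-8, so it cannot go into a
    JSON string directly; it has to be transcoded (losing the original bytes)
    or wrapped in base64 (roughly a third larger):

      import base64, json

      raw = "Grüße".encode("cp437")             # 8-bit legacy-charset body

      # json.dumps(raw) would raise TypeError: bytes are not JSON serializable.
      msg = {
          "from": "Oli, 21:3/102",
          "body_b64": base64.b64encode(raw).decode("ascii"),
      }
      print(json.dumps(msg))
      print(len(raw), len(base64.b64encode(raw)))   # base64 inflates the payload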

    ---
    * Origin: 1995| Invention of the Cookie. The End. (21:3/102)