
Support use of net.PacketConn in dtls.Listener #570

Merged: 11 commits, Aug 27, 2023

Conversation

hasheddan (Contributor) commented Aug 21, 2023

Description

#558 added support for DTLS 1.2 Connection IDs, but because the underlying routing mechanism in pion/transport/udp only supports associating packets with a connection by remote address, we are not yet able to actually receive packets on the connection with which a Connection ID is associated. This patchset adds a new dtls/net/udp package, consumed by default in dtls.NewListener, which allows for customized routing. It closely resembles pion/transport/udp, but allows for adding alternate IDs for a single connection.

This functionality is implemented through the use of two configuration options:

  • DatagramRouter: when provided, the UDP listener will apply the function to each incoming packet, and use the returned identifier for routing.
  • ConnectionIdentifier: when provided, the UDP listener will apply the function to each outgoing packet. If an identifier is returned that is not already associated with the connection, a new entry in the routing table will be added that associates the connection to that identifier.

Note: this PR is currently in draft to demonstrate the new routing mechanism. Based on discussion on this PR, we may choose to implement some of this functionality in pion/transport rather than here. If we do move forward with the proposed functionality, we will also need to update packetio.Buffer to support sending the net.Addr alongside each packet (right now it is just hacked together by gob encoding / decoding the net.Addr with the associated packet data).

Reference issue

Fixes #568

Benchmarks

The same benchmarks, which are not necessarily the most realistic, are copied from pion/transport/packetio to pion/dtls/pkg/net. Running them on my local machine provided the following results.

pion/transport/packetio

go test ./packetio/... -bench=. -benchmem
goos: linux
goarch: amd64
pkg: github.com/pion/transport/v2/packetio
cpu: 12th Gen Intel(R) Core(TM) i7-12700H
BenchmarkBufferWR14-20       	33276788	        35.62 ns/op	 393.01 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWR140-20      	25166851	        41.77 ns/op	3352.02 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWR1400-20     	22657819	        49.50 ns/op	28284.15 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWWR14-20      	33066428	        36.71 ns/op	 381.33 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWWR140-20     	26263891	        42.61 ns/op	3285.74 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWWR1400-20    	18707992	        62.39 ns/op	22439.13 MB/s	       0 B/op	       0 allocs/op
BenchmarkBuffer14-20         	24692239	        50.08 ns/op	 279.57 MB/s	       0 B/op	       0 allocs/op
BenchmarkBuffer140-20        	17865735	        61.53 ns/op	2275.16 MB/s	       1 B/op	       0 allocs/op
BenchmarkBuffer1400-20       	 9494719	       109.6 ns/op	12774.14 MB/s	       2 B/op	       0 allocs/op

pion/dtls/pkg/net

go test ./internal/net/... -bench=. -benchmem
goos: linux
goarch: amd64
pkg: github.com/pion/dtls/v2/internal/net
cpu: 12th Gen Intel(R) Core(TM) i7-12700H
BenchmarkBufferWR14-20       	31277917	        38.32 ns/op	 365.33 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWR140-20      	25308450	        45.09 ns/op	3105.11 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWR1400-20     	18748666	        60.70 ns/op	23062.71 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWWR14-20      	30761960	        36.54 ns/op	 383.14 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWWR140-20     	25150832	        44.51 ns/op	3145.05 MB/s	       0 B/op	       0 allocs/op
BenchmarkBufferWWR1400-20    	18885183	        62.52 ns/op	22391.09 MB/s	       0 B/op	       0 allocs/op
BenchmarkBuffer14-20         	 6214586	       177.0 ns/op	  79.08 MB/s	     448 B/op	       0 allocs/op
BenchmarkBuffer140-20        	 5931457	       203.3 ns/op	 688.79 MB/s	     530 B/op	       0 allocs/op
BenchmarkBuffer1400-20       	 3480960	       433.2 ns/op	3231.97 MB/s	    1313 B/op	       0 allocs/op

We are clearly paying a performance penalty with the new buffer strategy. However, given that we are keeping this package internal, and that users can opt to keep using the existing udp / packetio packages if they don't care about clients being able to update their remote address, this may be acceptable for the time being, with the knowledge that we can continue improving it without breaking users.

codecov bot commented Aug 21, 2023

Codecov Report

Patch coverage: 84.39% and project coverage change: +0.53% 🎉

Comparison is base (f1d8b0a) 76.98% compared to head (29202df) 77.52%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #570      +/-   ##
==========================================
+ Coverage   76.98%   77.52%   +0.53%     
==========================================
  Files          99      101       +2     
  Lines        6018     6420     +402     
==========================================
+ Hits         4633     4977     +344     
- Misses       1017     1066      +49     
- Partials      368      377       +9     
Flag Coverage Δ
go 77.54% <84.39%> (+0.53%) ⬆️
wasm 62.90% <81.08%> (+0.84%) ⬆️

Flags with carried forward coverage won't be shown.

Files Changed Coverage Δ
pkg/net/net.go 50.00% <50.00%> (ø)
connection_id.go 81.48% <81.81%> (+1.48%) ⬆️
internal/net/udp/packet_conn.go 85.43% <85.43%> (ø)
listener.go 51.11% <85.71%> (+4.76%) ⬆️
internal/net/buffer.go 91.79% <91.79%> (ø)

... and 3 files with indirect coverage changes


pkg/net/udp/packet_conn.go: three outdated review threads (resolved)
// Check to see if we are full.
if b.full {
// If so, grow AddrPacket buffer by one.
b.packets = append(b.packets[:b.write], append(make([]AddrPacket, 1), b.packets[b.write:]...)...)
hasheddan (Contributor, Author):

In the event that we receive a flood of writes without reads, this becomes quite inefficient. Compare to the grow logic of the packetio.Buffer, where we are doubling the size of the underlying byte slice until we reach a cutoff point (at which point we grow at a slower rate).

We could opt to use a similar strategy here, though we still perform a large number of allocations due to the need to allocate each AddrPacket's underlying byte slice. I have been able to bring this down to being equivalent (or less than) the number of allocations in the packetio.Buffer implementation by instead using an array with size equivalent to the UDP PacketConn MTU, which we know we will never exceed here. However, it does mean that we could be holding onto much more memory than needed. For example, if our buffer grows to size 100, the payload on all of those packets could each just be a few bytes, but the total allocated memory for payloads would be 100*MTU.

AddrPacket looks like this with the change:

type AddrPacket struct {
	addr net.Addr
	data [8192]byte
	size int
}

hasheddan (Contributor, Author):

Another option here would be to use a byte slice on AddrPacket but use a capped growing strategy for it as well.

hasheddan (Contributor, Author):

Chatted with @edaniels -- decision for now is to use a bytes.Buffer on AddrPacket for now. The PacketBuffer handles growing the ring buffer, but allocating the data for each packet is deferred to the bytes.Buffer. Will revisit opportunities to improve performance in the future.

hasheddan self-assigned this Aug 24, 2023
internal/net/buffer_test.go: outdated review thread (resolved)
Comment on lines 264 to 271
t.Run("100 writes and reads", test(wr, 100, 7))
t.Run("1000 writes and reads", test(wr, 1000, 7))
t.Run("10000 writes and reads", test(wr, 10000, 7))
hasheddan (Contributor, Author):

When we have perfect read / write match, we see identical allocation counts to the packetio.Buffer

hasheddan (Contributor, Author) commented Aug 25, 2023:

It is unclear to me why these are failing on the Go 1.19 / 1.20 test runs but not on the corresponding Go i386 1.19 / 1.20 test runs. Furthermore, it appears that if I update the values to match the Go 1.19 / 1.20 runs, the tests still pass on the Go i386 runs. The tests pass locally on my linux/amd64 machine. I would anticipate that differences between platforms and Go versions could certainly impact allocation behavior, but I am surprised to see that changing the values to match the Go 1.19 / 1.20 results has no impact on the i386 runs 🤔

hasheddan (Contributor, Author):

I am going to drop these for now as I'm not sure this is how we should be measuring allocations anyway. It would likely make more sense to invest in tracking benchmark regressions rather than trying to measure exact number of allocs on a few somewhat unrealistic use cases.

connection_id.go Outdated
@@ -26,3 +33,64 @@ func OnlySendCIDGenerator() func() []byte {
return nil
}
}

// cidConnResolver extracts connection IDs from incoming packets and uses them
hasheddan (Contributor, Author):

Unit tests incoming for these resolver and identifier functions 👍🏻

edaniels (Member) left a comment:

Done with first pass!

Main thing I'm asking for is to just make it clear what's new versus copied over code from transport.

connection_id.go Outdated
if err != nil || len(pkts) < 1 {
return "", false
}
h := &recordlayer.Header{}
edaniels (Member):

Suggested change
h := &recordlayer.Header{}
var h recordlayer.Header

the same goes for Header and MessageServerHello below, to better convey that these are uninitialized

}
hh := &handshake.Header{}
sh := &handshake.MessageServerHello{}
for _, pkt := range pkts {
edaniels (Member):

should you skip the packet you just processed?

hasheddan (Contributor, Author):

hmm not sure I follow -- we get passed a datagram, then break it down into "packets" (DTLS records), then iterate through them. This gets called once per datagram.

edaniels (Member):

ah i misread nvm

pkg/net/net.go Outdated
}

// pconn wraps a net.Conn and implements net.PacketConn.
type pconn struct {
edaniels (Member):

Suggested change
type pconn struct {
type packetConnWrapper struct {

hasheddan (Contributor, Author):

Good call 👍🏻


connLock sync.Mutex
conns map[string]*PacketConn
connWG *sync.WaitGroup
edaniels (Member):

Suggested change
connWG *sync.WaitGroup
connWG sync.WaitGroup

hasheddan (Contributor, Author):

Thanks for catching! This was copied over, so I wanted to check and see if there was any reason why this pattern was used or if it was just an accident. pion/transport/udp was recently copied from pion/udp to its new location. It appears that some changes were made during that process.

That commit was made well before the migration, and appears to fix a data race issue, so I believe it was erroneously dropped. I would recommend that we pull it back in, which unfortunately also means that we will need pion/udp/pkg/sync (https://github.com/pion/udp/tree/master/pkg/sync), so I'll go ahead and pull that in here. I'll also open an issue to see if we can get more clarity on whether the change was intentional, but I'd prefer to stay as true to the working functionality as possible, then make improvements in smaller batches as needed.

edaniels (Member):

oh yeah ignore this one. i meant to delete comments that I realized were on existing code

hasheddan (Contributor, Author):

I'm actually going to opt not to pull the pion/udp/pkg/sync in here quite yet until I can get some more clarity on why it wasn't migrated (asked in pion/transport#57 (comment)). For now, I am going to just update with your suggestion to use a sync.WaitGroup rather than *sync.WaitGroup 👍🏻

select {
case c := <-l.acceptCh:
close(c.doneCh)
// If we have an alternate identifier, remove it from the connection
edaniels (Member):

can a listener be used after close? if no, does doing this matter?

hasheddan (Contributor, Author):

I think you are correct that this does not matter -- we can't have an alternate identifier if this connection was never accepted (and thus never written to). However, given that this should not be in the hot path, I might lean towards the symmetry of deleting conn entries based on both remote address and identifier, but add a note that we should never have an alternate identifier established at this point. Open to pushback on that though -- what are your thoughts?

edaniels (Member):

that's fine to keep

// +build !js

// Package udp implements DTLS specific UDP networking primitives.
package udp
edaniels (Member):

are there any test modifications to make here to accommodate for the net.Addr return arg in Accept?

hasheddan (Contributor, Author):

Yes, we wrap the net.PacketConn in fromPC to be able to use the upstream StressDuplex functionality (which requires ReadWriter interface to be implemented). Other than that, there are a few cases where we call ReadFrom rather than Read, but the only change there is that the returned net.Addr is ignored.

https://github.com/pion/dtls/pull/570/files#diff-e534343f6cd79310f20cbbd44311b0173d735dc765e551d73acf5bdb3a5e6024R483 is net new to test correct routing based on identifier.

if c.listener.connIdentifier != nil {
id := c.id.Load()
candidate, ok := c.listener.connIdentifier(p)
// If this is a new identifier, add entry to connection map.
edaniels (Member):

can't you only have a CID set once?

hasheddan (Contributor, Author):

Yes, originally I checked to see if id == nil, but opted to not impose that restriction at this level given that it is a DTLS detail rather than a UDP one. We do pay a cost of calling the connIdentifier() though.

edaniels (Member):

yeah it seems like we should then make sure we only set it once though to avoid that cost. what do you think?

hasheddan (Contributor, Author):

That seems reasonable to me -- if we need to make it more generic to accommodate other use cases in the future we always can. I'll leave a comment reflecting that 👍🏻

// Package udp implements DTLS specific UDP networking primitives.
// NOTE: this package is an adaptation of pion/transport/udp. If possible, the
// updates made in this repository will be reflected back upstream. If not, it
// is likely that this will be moved to a public package in this repository.
edaniels (Member):

It'd be nice to list the changes here as well as in the specific places they happen in case this goes back upstream. It also helps for reviewing / history to understand what code is new versus moved over since git can't track that across repositories

hasheddan (Contributor, Author):

I like the idea of making the history more understandable, but I am wary of doing so by trying to describe each of the differences in comments, because that feels like a recipe for becoming stale over time if more changes are made in this package. I would recommend providing a permalink to the commit in pion/transport at which this was copied and changed, such that folks can use git diff to see the exact changes that were made, as well as any subsequent changes that were made here or in pion/transport. That being said, I think expanding the current comment to describe why this was "forked" would be useful. What do you think about that strategy?

edaniels (Member):

yeah that's very reasonable, a summary is fine!

// the incoming packet. If not set, any packet creates new conn.
AcceptFilter func([]byte) bool

// ConnectionResolver resolves an incoming packet to a connection by
edaniels (Member):

These names feel too close to me. What about <Incoming|Outgoing>ConnectionIdentifier?

hasheddan (Contributor, Author):

Hmm I'd like to distinguish that one is identifying a connection to send a packet to, while the other is establishing an identity for the connection itself. What would you think about PacketRouter and ConnectionIdentifier?

edaniels (Member):

yep that's fine

hasheddan (Contributor, Author):

Actually went with DatagramRouter to be a bit more precise.

// Increase the number of packets by 25%.
newPackets = len(b.packets) / 4
}
b.packets = append(b.packets[:b.write], append(make([]AddrPacket, newPackets), b.packets[b.write:]...)...)
edaniels (Member):

might it be more efficient and possibly easier to read to just allocate the total amount of new space and then copy over the existing packets followed by appending the new ones?

hasheddan (Contributor, Author):

Likely so -- I'll update and run the benchmarks to see if there is meaningful impact on performance 👍🏻

edaniels (Member):

sounds good. if anything it feels more readable

hasheddan (Contributor, Author) left a comment:

Thanks for your review @edaniels! I will get changes in ASAP, but feel free to respond to any of my comments in the meantime!

// Wait for a connection.
conn, err := listener.Accept()
util.Check(err)
// defer conn.Close() // TODO: graceful shutdown
hasheddan (Contributor, Author):

This is copied from the PSK example -- I'd probably opt to address them both at once but outside of this PR.










hasheddan (Contributor, Author) left a comment:


@edaniels all comments addressed and additional unit tests have been introduced -- ready for another review!

(will clean up git history prior to merge)

edaniels (Member) left a comment:


Just a few more things

listener.go: review thread resolved
ok: true,
want: string(cid),
},
"MultipleRecordMultipleConnectionID": {
edaniels (Member):


can we get a test where the CID is a different length?

hasheddan (Contributor, Author):


Will do 👍🏻

}

// PacketBuffer is a circular buffer for network packets. Each slot in the
// buffer supports a
edaniels (Member):


supports a what :P

})
}

func TestListenerCustomConnIDs(t *testing.T) {
edaniels (Member):


this test looks good but just a few things:

  1. In spawned test goroutines, you're generally not supposed to call test ending testing.T methods outside of the main goroutine (see https://pkg.go.dev/testing#F.FailNow). I'd suggest in general just having each non-main goroutine report its errors in some structured way (maybe a channel) and return early
  2. it'd be helpful to have 4, 5, and 20 be made constants so that this could be made configurable but also a little more readable in terms of how many servers/clients/extra messages there are
  3. After t.Error you should probably return
  4. Cases of p := &pkt{} should be var p pkt

hasheddan (Contributor, Author):


In spawned test goroutines, you're generally not supposed to call test ending testing.T methods outside of the main goroutine

@edaniels is this in reference to the t.Fatal calls in the provided datagram router and connection identifier functions? I believe all other calls in goroutines are using t.Error.

edaniels (Member):


Yep those ones!

hasheddan (Contributor, Author):


Cool! I think actually just returning "", false in those cases is likely more illustrative of real usage of the functionality. The resulting errors will show up in the checks that the clients and servers are performing. I'll update all of those t.Error calls with returns 👍🏻

@@ -532,8 +533,8 @@ func TestListenerCustomConnIDs(t *testing.T) {
var serverWg sync.WaitGroup
clientMap := map[string]struct{}{}
var clientMapMu sync.Mutex
// Start 5 servers.
edaniels (Member):


just need to fix up phaseOne [5] to use the const

edaniels (Member) left a comment:


LGTM!

Introduces a new net package, which defines networking interfaces used
throughout the pion/dtls code base, as well as utilities to help
consumers convert standard library net types into pion/dtls compatible
ones.

Signed-off-by: Daniel Mangum <[email protected]>
Updates unit and e2e tests to consume new utilities from the dtls net
package.

Signed-off-by: Daniel Mangum <[email protected]>
Introduce a network PacketBuffer which maintains a ring buffer of
network packets, each consisting of the packet payload and the remote
address from which it was received. This package borrows heavily from the
bytes-based ring buffer in pion/transport/packetio.

Signed-off-by: Daniel Mangum <[email protected]>
Adds PacketBuffer unit tests.

Signed-off-by: Daniel Mangum <[email protected]>
Introduces a UDP-based net.PacketListener that allows for routing UDP
datagrams based on remote address _or_ an alternate identifier. It is
configured by the DatagramRouter and ConnectionIdentifier functions,
which are provided by the caller. The former routes incoming datagrams by
introspecting their contents, while the latter introspects outgoing
datagrams for potential connection identifiers. This package borrows
heavily from pion/transport/udp.

Signed-off-by: Daniel Mangum <[email protected]>
Adds unit tests for UDP net.PacketListener.

Signed-off-by: Daniel Mangum <[email protected]>
Adds functions to route datagrams and identify connections by DTLS 1.2
Connection IDs.

Signed-off-by: Daniel Mangum <[email protected]>
Adds unit tests for CID routing functions.

Signed-off-by: Daniel Mangum <[email protected]>
Sets UDP routing functions in the default DTLS listener if a connection
ID generator is provided. Also updates to accept a dtls
net.PacketListener when a caller wishes to provide their own.

Signed-off-by: Daniel Mangum <[email protected]>
Adds an example for setting up a CID-enabled DTLS listener.

Signed-off-by: Daniel Mangum <[email protected]>
Adds an example for a client that only sends Connection IDs (i.e. does
not request to receive them). This is the most common scenario for DTLS
clients.

Signed-off-by: Daniel Mangum <[email protected]>
hasheddan merged commit 37fbc04 into pion:master on Aug 27, 2023
16 checks passed