The intent is definitely that the data should "roll over", i.e. as new data comes in, earlier data points are dropped as necessary to keep column lengths under max_size. From your description, I'm not sure if that is what you are seeing or not. Maybe some scenarios will help elucidate the expected behavior (assuming max_size=1000):
* First pull gets 600 items, CDS columns have length 600.
* Next pull gets 600 items; this plus the first data is greater than max_size, so the first 200 points are discarded. CDS columns have length 1000.
* Next pull gets 1300 items; this is greater than max_size all by itself. All the previous data is discarded, as well as 300 points from this pull, to keep column lengths at 1000.
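To make the scenarios concrete, here is a minimal sketch of that rollover behavior in plain Python (not Bokeh's actual implementation; the `stream` helper name is hypothetical): points are trimmed from the front of a column so it never exceeds max_size.

```python
def stream(column, new_data, max_size=1000):
    """Append new_data to column, dropping the oldest points
    so the column never exceeds max_size items."""
    combined = column + new_data
    # Keep only the most recent max_size points.
    return combined[-max_size:]

# Scenario walkthrough with max_size=1000:
col = stream([], list(range(600)))    # first pull of 600 items
assert len(col) == 600

col = stream(col, list(range(600)))   # 600 + 600 > 1000, first 200 dropped
assert len(col) == 1000

col = stream(col, list(range(1300)))  # 1300 > 1000, only the last 1000 survive
assert len(col) == 1000
```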
If that's the kind of thing you are seeing, I'd say that is expected. If you mean that you are at exactly max_size, and getting one more item causes everything to be blown away somehow, then that is not expected, and a GH issue (with a complete code sample to reproduce, and more detailed info about exactly how much data is coming in and in what order) would be appropriate.