Telcos Deny Trying To Turn FCC's Open Network Diagnostics Into A Closed, Proprietary Affair

from the well-of-course-they-are dept

The FCC has been working with M-Lab to measure basic network diagnostics using an open source solution, providing the public with information about internet network performance. This seems like a good thing… though you can see why not every ISP would want data about the performance of its network made public. Over the weekend, a warning went up that the telcos are pushing the FCC to stop using M-Lab and switch to their own ISP-managed diagnostics tools. Vint Cerf is raising the alarm about this:

Recently, the FCC measurement program has backed sharply away from their commitment to transparency, apparently at the bidding of the telcos in the program. The program is now proposing to replace the M-Lab platform with only ISP-managed servers. This effectively replaces transparency with a closed platform in which the ISPs — whose performance this program purports to measure — are in control of the measurements. This closed platform would provide the official US statistics on broadband performance. I view this as scientifically unacceptable.

For the health of the Internet, and for the future of credible data-based policy, the research community must push back against this move.

The FCC keeps insisting that it’s committed to openness, but all too frequently it seems to give in to telco demands. That makes this warning worth taking seriously.

For what it’s worth, the telcos claim that Cerf is overreacting. In a response to his call for action, Verizon’s David Young argued that there’s nothing to see here, and that M-Lab and the telco efforts have co-existed and can continue to co-exist going forward.

Vint breathlessly suggests that the FCC is now backing away from this openness “at the bidding of the telcos” and claims the program is proposing to replace the M-Lab platform with only ISP-managed servers. THIS IS FALSE. ISPs have made no such request of the FCC nor has the FCC proposed to eliminate use of M-Lab’s servers.

What has been proposed is that, in addition to continuing to use the data collected via the M-Lab servers, the FCC and SamKnows may also rely on the ISP provided servers that have been in use since the beginning of the project. These ISP-provided servers meet the specifications required by SamKnows as do the M-Labs servers. In fact, it was only because of the presence of these non-M-Lab, ISP-donated servers, that SamKnows was able to identify problems with an M-Lab server that was affecting the results of the tests being conducted. M-Labs did not identify this server problem on their own. It was only fixed when SamKnows brought the issue to their attention. By the way, this problem forced the FCC to abandon a month’s worth of test data, extend the formal test period and delay production of their report. Later, another M-Lab server location had transit problems that again affected results. This was the second M-Labs-related server problem in two months and once again, it was SamKnows, using the ISP-provided servers as a reference who identified the problem and brought it to M-Labs attention.
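
Young’s account describes, in effect, a cross-validation setup: results from one pool of measurement servers get sanity-checked against a second, independent pool, which is how the misbehaving M-Lab servers were caught. A minimal sketch of that idea in Python; the server names, throughput figures, and the 20% threshold are all invented for illustration, not drawn from the actual program:

```python
# Minimal sketch of cross-validating measurement servers against a
# reference pool. All data, server names, and the 20% threshold are
# hypothetical illustrations, not the SamKnows methodology.
from statistics import median

# Throughput samples (Mbps) keyed by server, e.g. from scheduled tests.
samples = {
    "mlab-nyc":  [18.9, 11.2, 11.1, 10.8, 19.0],  # degrades mid-run
    "mlab-lax":  [19.1, 18.8, 19.3, 19.0, 18.7],
    "isp-ref-1": [19.0, 19.2, 18.9, 19.1, 19.3],  # reference pool
    "isp-ref-2": [18.8, 19.0, 19.1, 18.9, 19.2],
}

reference = ["isp-ref-1", "isp-ref-2"]
baseline = median(m for s in reference for m in samples[s])

for server, mbps in samples.items():
    if server in reference:
        continue
    # Flag a server whose median result strays more than 20% from the
    # reference baseline; a real program would use sturdier statistics.
    if abs(median(mbps) - baseline) / baseline > 0.20:
        print(f"{server}: median {median(mbps):.1f} Mbps vs "
              f"baseline {baseline:.1f} Mbps -- investigate")
```

SamKnows hasn’t published its validation methods at this level of detail, so treat this purely as an illustration of why a second, independent pool of servers is useful as a reference.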

As with many such disputes, the reality may lie somewhere between the two claims. Cerf’s fear seems to be that establishing the telcos’ servers on equal footing with M-Lab’s open setup opens the door to replacing M-Lab’s efforts entirely, and then potentially to locking up the data. Young is correct that the openness is mainly a matter of FCC policy at this point, but that policy depends on the FCC’s current leadership, which could change. At the very least, it would be nice to see a stated commitment to keeping the information open on an ongoing basis, so there’s no need to worry going forward.

Companies: verizon


Comments on “Telcos Deny Trying To Turn FCC's Open Network Diagnostics Into A Closed, Proprietary Affair”

12 Comments
art guerrilla (profile) says:

i am a sam knows participant...

…and while i appreciate the service they are doing, i am not 100% certain *their* measurements are either correct, or are not being spoofed by my ISP (who i despise, but I HAVE NO CHOICE)…
to wit: starting at just before xmas 2011, our 3 Mbps DSL was *almost* unusable (and *was* -in fact- unusable for -you know- crazy stuff like watching videos or listening to music online) for almost 6 freaking MONTHS…
needless to say, calling our ISP resulted in nothing but lies and bullshit (and NOW they say we NEVER called during this 6 month period, the lying bastards!)…
the monthly report they gave me during this time showed the EXTREMELY variable speed, but didn’t reflect that we were getting 1/10th to 1/20th the speed during our ‘normal’ usage time: from after-work-o-clock, to midnight…
sure, i bet if you measured at 3-4 in the morning, the speed was *somewhat* better; but for 90% of the time, IT SUCKED…
in any event, either they are not measuring ‘real’ performance, are taking random samples which didn’t reflect our crappy service, or the ISP was spoofing the connection, who knows…
but -you know- putting the foxes in charge of the henhouse is always a good idea…
art guerrilla
aka ann archy
eof

Anonymous Coward says:

I’m surprised that the ISPs haven’t learned anything from SOPA. It can be a great distraction.

What Verizon’s David Young should have said when confronted was “Look over there! They are trying to sneak in SOPA again!” While everyone turns to look, he should drop a smoke bomb and let out an evil chuckle while running all the way to the bank.

=P

ECA (profile) says:

Comments

1. sign-in isnt working, not for me anyway..
2. that funny bar on the bottom is stupid.

Ok,
For those that understand a few things about BENCHMARK programs..and how MANY corps have inserted their OWN code to bypass or MOD the program to work BEST on their OWN CARDS.

Then comes the idea of a CORP offering you to USE a certain SPEED program to test their SITE..
there are many things to SEE/TEST when you test a site, and connections.
OS-LAG
SITE-LAG
How many JUMPS-LAG
SYSTEM-LAG
Even your video card can add LAG..as windows WAITS for your video to DO SOMETHING before it decides to keep connecting.(fun isnt this)
LAG is a general term. Different programs TEST in different ways also. JUST testing from your NET card to another NET CARD is very quick. TESTING a PROGRAM, transfer and render, and then RETURN of that program is more thorough, AND TESTS MORE THAN ping from 1 machine to another.
I wont even get into TRAFFIC monitoring by certain GROUPS, which can also ADD to your lag times..

For those of us OLDER than dirt, we remember some of the OLD programs that DID something in a straightforward fashion and gave us DETAILS and information we could use that was TRUTHFUL, and in a way would tell us WHERE the problems were.
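
The distinction ECA draws above, between a bare ping from one network card to another and a full transfer-and-return test, is a real one: the two measure different things. A minimal sketch in Python, standard library only; the host and URL are placeholders, not real measurement endpoints:

```python
# Minimal sketch contrasting a latency probe (TCP connect round trip)
# with a throughput test (timed bulk transfer). The host and URL are
# placeholders, not any real measurement endpoint.
import socket
import time
import urllib.request

HOST, PORT = "example.com", 80
URL = "http://example.com/"  # a real test would fetch a large file

# Latency: time a single TCP handshake (plus DNS lookup). Fast, but it
# says nothing about sustained rates, buffering, or shaping on the path.
start = time.perf_counter()
with socket.create_connection((HOST, PORT), timeout=5):
    rtt_ms = (time.perf_counter() - start) * 1000
print(f"TCP connect time: {rtt_ms:.1f} ms")

# Throughput: time an actual payload transfer end to end. Slower to
# run, but closer to what a user experiences when streaming video.
start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=30) as resp:
    nbytes = len(resp.read())
elapsed = time.perf_counter() - start
print(f"Transferred {nbytes} bytes in {elapsed:.2f} s "
      f"({nbytes * 8 / elapsed / 1e6:.2f} Mbps)")
```

Peak-hour throttling or congestion, as in the complaint further up the thread, tends to show up in the second number long before it shows up in the first.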

art guerrilla (profile) says:

Re: Re: Comments

not to speak for him, but i *think* his point is, cpu, gpu, other hardware and software companies have been gaming ‘benchmarks’ *for-freaking-ever*…
it would hardly be surprising if ISPs rigged their benchmarks too…

gpu manufs went (and prob still do) to EXTREME lengths to try and game the various popular graphics benchmarks…
…and it worked ! they would beat the other guys by reverse-engineering the benchmark code, and figuring out how they could trick it, anticipate it, or otherwise game the testing software/hardware…
the point being -made in the concurrent article about leahy’s cameo, and the subsequent private showing that wasn’t a gift ’cause they gamed it- *whatever* ‘laws’ (how quaint), ‘rules’, ‘regulations’, ‘guidelines’, ‘by-laws’, or other strictures we mere 99% *attempt* to emplace upon our betters, are ONLY worth the enforcement we can engender…

if we can’t enforce (even weak-tea laws), then laws are all but meaningless… in fact, *worse* than meaningless, because they offer the *appearance* of lawfulness, when there is none…

harsh laws for us 99%, with draconian enforcement; and squishy, malleable, hardly-worth-mentioning ‘laws’ for the 1%, and those unenforced, at that !
i am certain that is a sure-fire recipe for a stable society…

art guerrilla
aka ann archy
eof

schulzrinne (profile) says:

FCC take on story

Yesterday, Vint Cerf distributed an open letter regarding concerns about the Measuring Broadband America measurement infrastructure. We share the objectives of the letter writers that “Open data and an independent, transparent measurement framework must be the cornerstones of any scientifically credible broadband Internet access measurement program.” Unfortunately, the letter claims: “Specifically, that the Federal Communications Commission (FCC) is considering a proposal to replace the Measurement Lab server infrastructure with closed infrastructure, run by the participating Internet service providers (ISPs) whose own speeds are being measured.” This is false.

The FCC is not considering replacing the Measurement Labs infrastructure. As part of a consensus-based discussion in the Measurement Collaborative, a group of public interest, research and ISP representatives, we have discussed how to enhance the existing measurement infrastructure to ensure the validity of the measurement data. Any such enhancements would be implemented solely to provide additional resiliency for the measurement infrastructure, not to replace existing infrastructure. Any data gathered would be subject to the same standards of data access and openness.

We look forward to continuing to work with all participants in a process that has provided American consumers and the research community with network performance data of unmatched scale and scientific rigor. We appreciate the contributions of all participants, in particular Measurement Labs, to this effort.

Henning Schulzrinne
CTO, FCC

Dave (profile) says:

Network Testing

IMHO, the only effective test for web performance would be a measurement made every five minutes for a period of one week, between two points on the network. This would be repeated for every major node, for each ISP, in every city in the US, on identical off-the-shelf equipment running identical open-source software (assuming multiple tests were run at the same time). Might get a little expensive for the tester, and take years, but we’d at least have valid data.
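
The fixed-interval half of the regimen Dave describes is easy to sketch. A minimal Python version, assuming a run_probe() measurement function along the lines of the tests above; the probe body, filename, and interval handling are all illustrative:

```python
# Minimal sketch of the regimen proposed above: one measurement every
# five minutes for a week, appended to a CSV log. The probe itself is
# a hypothetical stand-in; a real test would measure an actual path.
import csv
import random
import time
from datetime import datetime, timezone

INTERVAL_S = 5 * 60          # five minutes between probes
DURATION_S = 7 * 24 * 3600   # one week total

def run_probe() -> float:
    """Placeholder probe; returns measured throughput in Mbps."""
    return random.uniform(1.0, 20.0)  # stand-in for a real test

deadline = time.monotonic() + DURATION_S
with open("probes.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while time.monotonic() < deadline:
        started = time.monotonic()
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         f"{run_probe():.2f}"])
        f.flush()
        # Sleep out the remainder of the interval so probes stay on a
        # fixed five-minute grid regardless of how long each test took.
        time.sleep(max(0.0, INTERVAL_S - (time.monotonic() - started)))
```

Scaling this from one pair of endpoints to every major node of every ISP in every US city is, as Dave says, where the expense comes in; the scheduling itself is the cheap part.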
