Wanted: volunteers with bandwidth/storage to help save climate data
This is a short-term (about one month) project being thrown together in a hurry... and it could use some help. I know that some of you have lots of resources to throw at this, so if you have an interest in preserving a lot of scientific research data, I've set up a mailing list to coordinate IT efforts to help out. Sign up via climatedata-request@firemountain.net or, if you prefer Mailman's web interface, http://www.firemountain.net/mailman/listinfo/climatedata should work.

Thanks,
---rsk
If you’re interested, there’s also a Slack team: climatemirror.slack.com

You can find more info about that here:

- https://climate.daknob.net/
- http://climatemirror.org/
- http://www.ppehlab.org/datarefuge

Thank you for your help!
See also:

https://twitter.com/textfiles/status/808715999042117632
https://twitter.com/textfiles/status/808922272551550976

Jason Scott @textfiles: "When your boss gives you the go-ahead to mirror 200tb of NOAA data, you run with it"

Don't let the fact that The Internet Archive is all over this deter you, though. Coordinate here:

https://docs.google.com/spreadsheets/d/12-__RqTqQxuxHNOln3H5ciVztsDMJcZ2SVs1...

Royce

On Fri, Dec 16, 2016 at 7:02 AM, DaKnOb <daknob.mac@gmail.com> wrote:
Surfing through the links - any hints on how big these datasets are? Everyone's got a few TB to throw at things, but fewer of us have spare PB to throw around.

There's some random #s on the goog doc sheet for sizes (100's of TB for the Landsat archive seems credible), and there's one number that destroys the credibility of the sheet (100000000000 GB (100 ZB)) for the EPA archive. The other page has many 'TBA' entries for size.

Not sure what level of player one needs to be to serve a useful segment of these archives. I realize some of the datasets are tiny (<GB), but which ones are most important vs. size (i.e. the win-per-byte ratio) isn't indicated. (I know it's early times.)

Also, I hope they've SHA512'd the datasets for authenticity before all these myriad copies being flung about are 'accused' of being manipulated 'to promote the climate change agenda', yadda.

Canada: time to step up! (Can't imagine the Natl Research Council would do so on their mirror site - too much of a gloves-off slap in the face to Trump.)

/kc

On Fri, Dec 16, 2016 at 06:02:46PM +0200, DaKnOb said:
-- Ken Chase - math@sizone.org Guelph Canada
We are currently working on a scheme to successfully authenticate and verify the integrity of the data. Datasets in https://climate.daknob.net/ are compressed to a .tar.bz2 and then hashed using SHA-256. The final file with all checksums is then signed using a set of PGP keys.

We are still working on a viable way to verify the authenticity of files before there are tons of copies lying around, and there’s a working group in the Slack team I sent previously where your input is much needed!

Thanks,
Antonios
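[For readers mirroring the data: the verification side of the scheme described above can be sketched roughly as below. This is a minimal illustration only - the filenames (`SHA256SUMS`, `data.bin`) are hypothetical, and the signed checksum list would first be verified separately, e.g. with `gpg --verify SHA256SUMS.asc SHA256SUMS`.]

```python
import hashlib

def sha256_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file in chunks, without loading it all into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_checksums(sums_path):
    """Check files against a SHA256SUMS-style list: '<hex digest>  <filename>' per line.

    Returns a dict mapping filename -> True (match) / False (mismatch).
    """
    results = {}
    with open(sums_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            expected, name = line.split(None, 1)
            results[name] = (sha256_file(name) == expected.lower())
    return results
```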
The University of Toronto's Robarts Library is hosting an all-day event tomorrow for people to surf and help identify datasets, survey and get sizes and details, authenticate copies, etc.

FB event: https://www.facebook.com/events/1828129627464671/

/kc

On Fri, Dec 16, 2016 at 06:42:46PM +0200, DaKnOb said:
It would seem the more copies the better. Chunking this data up and using .torrent files may be a way to both (a) ensure the integrity of the data, and (b) enable an additional method to ensure that there are enough copies being replicated (initial seeders would hopefully retain the data for as long as possible)...

On Fri, Dec 16, 2016 at 12:24 PM, Ken Chase <math@sizone.org> wrote:
-- Miano, Steven M. http://stevenmiano.com
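[The integrity property the .torrent suggestion relies on can be sketched as below: BitTorrent v1 splits content into fixed-size pieces and publishes a SHA-1 hash per piece, so any mirror can verify any downloaded piece independently. The piece size here is a hypothetical choice for illustration.]

```python
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KiB; typical v1 torrents use 256 KiB to a few MiB

def piece_hashes(data, piece_size=PIECE_SIZE):
    """Split data into fixed-size pieces and SHA-1 each one, as BitTorrent v1 does."""
    return [
        hashlib.sha1(data[i:i + piece_size]).digest()
        for i in range(0, len(data), piece_size)
    ]

def verify_piece(data, index, expected_hashes, piece_size=PIECE_SIZE):
    """Verify a single downloaded piece against the published hash list."""
    piece = data[index * piece_size:(index + 1) * piece_size]
    return hashlib.sha1(piece).digest() == expected_hashes[index]
```

A .torrent file carries exactly this concatenated hash list in its `pieces` field, which is why a seeder can detect (and re-fetch) a corrupted or tampered piece without re-downloading the whole dataset.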
I seriously doubt that there's going to be a witch hunt even close to as well funded as anti-torrent, DMCA-wielding piracy hunters, and it's not even nearly the same as keeping a copy of wikileaks.aes, or satellite photos of Streisand's campus, or photos of Elian Gonzalez, a copy of DeCSS, the CyberSitter killfile, etc. ("we've been here before").

The issue will be 1000s of half copies that are from differing dates, sometimes with no timestamps or other metadata, no SHA-256 sums, etc. It's going to be a records-management nightmare. Remember, all these agencies won't be shut down on Jan 20th, making that the universal timestamp date. Some of them may even be encouraged to continue producing data, possibly even cherry-picked or otherwise tainted. Others will carry on quietly, without the administration noticing.

I'm glad some serious orgs are getting into it - U of T, archive.org, Wikipedia, etc. We'll have at least a few repos that cross-agree on progeny, date, SHA-256, etc. Only once jackboots are knocking on doors ("where's the ice-core sample data, Lebowski!") will we really have to consider the quality levels of the other repos. Not that they shouldn't be kept either, of course.

Remember, this is only one piece of the puzzle. The scientists can do as much data-collecting as they want -- if the political side of the process wants to make 'mentioning climate change illegal' in state bills or other policies or department missions, it's far more effective than rm'ing a buncha datasets.

http://abcnews.go.com/US/north-carolina-bans-latest-science-rising-sea-level...

Nonetheless - mirror everything everywhere always...

/kc

On Fri, Dec 16, 2016 at 02:05:01PM -0500, Steven Miano said:
-- Ken Chase - math@sizone.org
On 12/16/2016 3:30 PM, Ken Chase wrote:
http://abcnews.go.com/US/north-carolina-bans-latest-science-rising-sea-level...
North Carolina is not banning science. It is banning absolutely preposterous and manipulated junk science.

A 39-inch rise in ocean levels over the next century is based on fear-mongering and junk science designed to scare politicians into increasing grant $$ from the federal government. It is not based on science. In fact, sea levels continue to rise at the SAME TINY 2-4mm per year that they've been rising at for decades, with ZERO sign of an increase. If global warming were real and cumulative, this shouldn't even be possible, based on all that we've been told over the past 20 years. Every article that states that oceans are rising at alarmingly faster rates - due to global warming - either lies about or manipulates the data... or grabs one relatively small short-term spike and extrapolates from that.

Meanwhile, dozens of sea-level-rise predictions from so-called credible scientists have not only failed, but failed by orders of magnitude, and again, relied upon junk science. True science makes "risky predictions" and is willing to throw out the theory when that theory's "risky predictions" don't come true.

But I truly do hope that this collection process is successful, because I hope that ALL of this (mostly) manipulated data gets recorded for posterity, so that (honest) scientists a century from now can do extensive studies on how/why science became so political and manipulated, as they look back on the first few decades of the 21st century's slide into a strong long-term cooling trend due to long-term cyclical sun cycles.

This is not a victimless crime. This manipulation of the data by global warmongers harms people because it miscalculates resources and damages the economy. Does that mean we should spew toxic waste into rivers or streams, or spew smog into the air? Of course not. But global warming - and CO2 being a cause of it, and "oceans rising" - has MUCH junk science behind it.

Still, I hope this data is preserved. The truth will win out in the long term (as is already starting to happen).

-- Rob McEwen
This started as a technical appeal, but: https://www.nanog.org/list

1. Discussion will focus on Internet operational and technical issues as described in the charter of NANOG.
...
6. Postings of political, philosophical, and legal nature are prohibited.
...

-- Hugo Slabbert | email, xmpp/jabber: hugo@slabnet.com
pgp key: B178313E | also on Signal

On Fri 2016-Dec-16 16:35:36 -0500, Rob McEwen <rob@invaluement.com> wrote:
On 12/16/2016 4:48 PM, Hugo Slabbert wrote:
This started as a technical appeal, but: https://www.nanog.org/list 1. Discussion will focus on Internet operational and technical issues as described in the charter of NANOG. 6. Postings of political, philosophical, and legal nature are prohibited.
EXACTLY. But I had to finally respond, because it was getting obnoxious... all the "we all think this way and we KNOW that the other side is wrong" implications/statements embedded in various previous posts.

-- Rob McEwen
On 12/16/2016 1:48 PM, Hugo Slabbert wrote:
This started as a technical appeal, but:
1. Discussion will focus on Internet operational and technical issues as described in the charter of NANOG.
Hard to see how the OP has anything to do with either of the above.
On Sat, Dec 17, 2016 at 6:15 PM, Doug Barton <dougb@dougbarton.us> wrote:
On 12/16/2016 1:48 PM, Hugo Slabbert wrote:
This started as a technical appeal, but:
1. Discussion will focus on Internet operational and technical issues as described in the charter of NANOG.
Hard to see how the OP has anything to do with either of the above.
Actually, it's not that hard ... *if* we can control ourselves from making them partisan, and focus instead on the operational aspects. (Admittedly, that's pretty hard!)

The OP's query was a logical combination of two concepts:

- First, from the charter (emphasis mine): "NANOG provides a forum where people from the network research community, the network operator community and the network vendor community can come together *to identify and solve the problems that arise in operating and growing the Internet*."

- Second, from John Gilmore: "The Net interprets censorship as damage and routes around it."

The OP appears to be managing risk associated with a (perhaps low) chance of future censorship. Was the OP asking a straight question about BGP or SFPs or CDNs? Of course not. But should doctors only talk about surgical technique -- and not about, say, the need for a living will? Of course not.

IMO, *operational, politics-free* discussion of items like these would also be on topic for NANOG:

- Some *operational* workarounds for country-wide blocking of Facebook, Whatsapp, and Twitter [1], or Signal [2]
- The *operational* challenges of replicating the Internet Archive to Canada [3]

Each operator has to make such risk calculations for themselves. Some may see the "NA" in NANOG as insurance that such censorship could never happen here. Others -- especially those who came from other countries -- may feel differently.

Put another way: Everyone has a line at which "I don't care what's in the pipes, I just work here" changes into something more actionable. Being *operationally* ready for that day seems like a good idea to me.

Royce

1. http://www.telegraph.co.uk/technology/2016/12/20/turkey-blocks-access-facebo...
2. http://www.nytimes.com/aponline/2016/12/20/world/middleeast/ap-ml-egypt-app-...
3. https://blog.archive.org/2016/11/29/help-us-keep-the-archive-free-accessible...
On 12/20/2016 8:08 AM, Royce Williams wrote:
Actually, it's not that hard ... *if* we can control ourselves from making them partisan, and focus instead on the operational aspects. (Admittedly, that's pretty hard!)
The OP's query was a logical combination of two concepts:
- First, from the charter (emphasis mine): "NANOG provides a forum where people from the network research community, the network operator community and the network vendor community can come together *to identify and solve the problems that arise in operating and growing the Internet*."
- Second, from John Gilmore: "The Net interprets censorship as damage and routes around it."
[snip]
Everyone has a line at which "I don't care what's in the pipes, I just work here" changes into something more actionable.
Stretched far beyond any credibility. Your argument boils down to, "If it's a political thing that *I* like, it's on topic."
On Wed, Dec 21, 2016 at 04:41:29PM -0800, Doug Barton said: [..]
Everyone has a line at which "I don't care what's in the pipes, I just work here" changes into something more actionable.
Stretched far beyond any credibility. Your argument boils down to, "If it's a political thing that *I* like, it's on topic."
"If it's a politically-generated thing I'll have to deal with at an operational level, it's on topic."

That work?

/kc

-- Ken Chase - math@sizone.org
On Wed, Dec 21, 2016 at 3:49 PM, Ken Chase <math@sizone.org> wrote:
On Wed, Dec 21, 2016 at 04:41:29PM -0800, Doug Barton said: [..]
Everyone has a line at which "I don't care what's in the pipes, I just work here" changes into something more actionable.
Stretched far beyond any credibility. Your argument boils down to, "If it's a political thing that *I* like, it's on topic."
I can see why you've concluded that. My final phrasing was indeed ambiguous. I would have hoped that the rest of my carefully non-partisan post would have offset that ambiguity.
"If it's a politically-generated thing I'll have to deal with at an operational level, it's on topic."
That work?
That is indeed what I was trying to say - thanks, Ken. Royce
I can't for the life of me see why we'd have to deal with it in the course of our jobs beyond calling someone and having them install more A/C. This is, flat-out, off topic.

Andrew

On Wed, Dec 21, 2016 at 9:15 PM, Royce Williams <royce@techsolvency.com> wrote:
On Wed, 21 Dec 2016 21:54:42 -0500, Andrew Kirch said:
I can't for the life of me see why we'd have to deal with it in the course of our jobs beyond calling someone and having them install more A/C. This is, flat-out, off topic.
You don't have any fiber that runs into regen shacks in low-lying areas that didn't *used* to flood, do you? Ask Verizon how much fun they had getting salt water off underground copper after Sandy.
On 12/21/2016 06:15 PM, Royce Williams wrote:
On Wed, Dec 21, 2016 at 3:49 PM, Ken Chase <math@sizone.org> wrote:
On Wed, Dec 21, 2016 at 04:41:29PM -0800, Doug Barton said: [..]
Everyone has a line at which "I don't care what's in the pipes, I just work here" changes into something more actionable.
Stretched far beyond any credibility. Your argument boils down to, "If it's a political thing that *I* like, it's on topic."
I can see why you've concluded that. My final phrasing was indeed ambiguous. I would have hoped that the rest of my carefully non-partisan post would have offset that ambiguity.
There was no ambiguity, your argument was clear. I simply think you were wrong. :)
"If it's a politically-generated thing I'll have to deal with at an operational level, it's on topic."
That work?
That is indeed what I was trying to say - thanks, Ken.
Again, hard to see how the OP asking for assistance with his pet project fits any definition of "have to deal with at an operational level." But now I'm repeating myself, so I'll leave it at that. Doug
"If it's a politically-generated thing I'll have to deal with at an operational level, it's on topic."

Hmm.. works for me.

and do not omit the amplification attack of endless rinse-repeat of self-righteous pontification of what people should and should not post

randy
I mind not one iota to store some on my computer, but it won't be accessible, because I don't want to publish it until I can get a dedicated server.

On 2016-12-22 09:17 PM, Randy Bush wrote:
"If it's a politically-generated thing I'll have to deal with at an operational level, it's on topic." Hmm.. works for me. and do not omit the amplification attack of endless rinse repeat of self-righteous pontification of what people should and should not post
randy
On Tue, Dec 20, 2016 at 7:08 AM, Royce Williams <royce@techsolvency.com> wrote: [snip]
IMO, *operational, politics-free* discussion of items like these would also be on topic for NANOG:
- Some *operational* workarounds for country-wide blocking of Facebook, Whatsapp, and Twitter [1], or Signal [2]
[snip]
2. http://www.nytimes.com/aponline/2016/12/20/world/middleeast/ap-ml-egypt-app-...
Steering things back towards the operational: the makers of Signal announced today [1] an update to Signal with a workaround for the blocking that I noted earlier. Support in iOS is still in beta.

The technique (which was new to me) is called 'domain fronting' [2]. It works by distributing TLS-based components among domains for which blocking would cause wide-sweeping collateral damage (such as Google, Amazon S3, Akamai, etc.), making blocking less attractive. Since it's TLS, the Signal connections cannot be differentiated from other services in those domains.

Signal's implementation of domain fronting is currently limited to countries where the blocking has been observed, but their post says that they're ramping up to make it available more broadly, and to automatically enable the feature when non-local phone numbers travel into areas subject to blocking.

The cited domain-fronting paper [2] was co-authored by David Fifield, who has worked on nmap and Tor.

Royce

1. https://whispersystems.org/blog/doodles-stickers-censorship/
2. http://www.icir.org/vern/papers/meek-PETS-2015.pdf
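[For the curious, the two-layer trick can be sketched conceptually as below: the DNS lookup and TLS SNI name the high-collateral "front" domain, while the HTTP Host header - which travels inside the encrypted stream - names the real service. The hostnames are hypothetical, and this sketch only builds the request layers rather than performing a real connection.]

```python
def build_fronted_request(front_domain, hidden_host, path="/"):
    """Assemble the two layers of a domain-fronted HTTPS request.

    A censor observing the wire sees only the SNI value (and the front's
    IP address); the Host header is inside the encrypted TLS stream, so
    the real destination is not visible without blocking the front itself.
    """
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {hidden_host}\r\n"   # inner, encrypted: the real service
        f"Connection: close\r\n"
        f"\r\n"
    )
    return {
        "sni": front_domain,  # outer, visible: the high-collateral front
        "request": request,
    }
```

The point of the design is economic: blocking the front domain takes down every other service hosted behind it, which is exactly the collateral damage the Signal post describes.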
Simply put… if the data hosted on the aforementioned sites matters, then cough up the damn space and host it. Disk space is cheap as hell these days; parse it and get the hell on with it already.

*Disclaimer*: not meant to single out any one party in this conversation, but the whole subject altogether. Need someone to help mirror the data? I may or may not be able to assist with that. Provide the space to upload it to, and direction to the data you want. But beyond all that, this subject is plainly off topic.
On Dec 21, 2016, at 22:16, Royce Williams <royce@techsolvency.com> wrote:
[snip]
-- Jason Hellenthal JJH48-ARIN
On Wed, Dec 21, 2016 at 8:03 PM, Jason Hellenthal <jhellenthal@dataix.net> wrote:
[snip]
Jason, understood. I clearly should have updated the subject line of the thread, as you're not the first to respond to the subject line instead of to what I've actually been saying recently. :) My most recent reply was about some operational aspects of country-wide Signal blocking, not the OP topic. I would almost consider updating the subject accordingly... but at this point, it's clear that transcendence of the amygdala will continue to elude us, and this thread would apparently rather die than suffer my attempts to beat it into a plowshare. :)

Royce
On Dec 21, 2016, at 22:16, Royce Williams <royce@techsolvency.com> wrote:
[snip]
On Fri, Dec 16, 2016 at 1:35 PM, Rob McEwen <rob@invaluement.com> wrote:
On 12/16/2016 3:30 PM, Ken Chase wrote:
A 39-inch rise in the ocean levels over the next century is based on fear-mongering and junk science designed to scare politicians into increasing grant $$ from the federal government. It is not based on science.
39 inches? I'm going to start laying fiber up and down I-5. I'll have the cheapest trans-ocean cable between Canada and Mexico... -A
On 16 Dec 2016, at 21:35, Rob McEwen <rob@invaluement.com> wrote:
But global warming and CO2 being a cause of it.
http://climate.nasa.gov/evidence/

What sort of effects do you reckon a 35% increase in atmospheric CO2 over historical levels over the space of 65 years might lead to?

- Mark
On 2016-12-16 10:58, Rich Kulawiec wrote:
This is a short-term (about one month) project being thrown together in a hurry...and it could use some help.
How much data are we talking about here? A few floppy disks? A couple of megabytes? Gigabytes? Terabytes? Petabytes? Have you considered giving "courtesy copies" to other environmental organisations, such as Environment Canada, the Australian Bureau of Meteorology, etc.?
participants (16)
- Aaron C. de Bruyn
- Andrew Kirch
- DaKnOb
- Doug Barton
- Hugo Slabbert
- Jason Hellenthal
- Jean-Francois Mezei
- Ken Chase
- Large Hadron Collider
- Mark Blackman
- Randy Bush
- Rich Kulawiec
- Rob McEwen
- Royce Williams
- Steven Miano
- Valdis.Kletnieks@vt.edu