2010 09 26

News Cast for 9/26/2010

(00:17) Intro

(03:33) Security alerts

(03:52) De-constructing the insidious Stuxnet worm

  • http://www.wired.com/threatlevel/2010/09/stuxnet/
  • Kim Zetter explains why the Stuxnet worm drew the attention of security researchers
  • It is an incredibly sophisticated piece of malware
  • Not just its code but its target makes it stand out
  • It does not go after credit card data or personal info
    • Rather it is most definitely targeting infrastructural systems
    • Specifically a particular model of Siemens SCADA system
  • SCADA stands for supervisory control and data acquisition
  • The worm has been on the loose since June
    • And is known to have hit 100K systems so far
  • A small Belarusian firm first spotted it in Iran
    • Which is also where most infections appear to have occurred
  • It was at first thought to be designed to steal intellectual property
  • The reasoning being that it was most likely part of some industrial espionage effort
  • After studying the worm in captivity
    • The latest findings show it is also capable of sabotage
  • Worse, the complexity and refinement of the code are considerable
  • This suggests the resources needed to build it
    • Are most likely those of a nation-state rather than a firm
  • In particular, writing the worm would have required extensive knowledge of the target
  • It doesn't just turn the SCADA appliances into zombies
    • But thoroughly and surgically alters the way they run
  • It is unclear what the result of an attack this deep may be
    • Though one researcher thinks it is no less than real world damage
  • That's the theory of German researcher Ralph Langner
    • Who posted one of the most extensive write-ups of the worm
  • Zetter goes on to detail some of the novel aspects of the malware
  • The one that caught my eye is that it doesn't rely solely on a command-and-control server
  • It can operate in a peer-to-peer mode to spread updates (see the sketch after this list)
  • Many of those covering this story have speculated
    • That the worm was at least partially targeting Iran's nuclear reactors
  • Langner is the main proponent of this idea
    • Though all he points to as evidence
      • Is a screenshot of a console at a facility that appears to be the Siemens system in question
  • There is a lot of other circumstantial evidence in the article
  • It is possible that the incidents in question
    • Were caused by the worm
  • The biggest question this raises: why was the attack so seemingly diffuse?
  • Many other systems in other countries were also affected
  • Maybe that was a smokescreen, intended to confuse matters
    • Or simply the nature of using an imprecise tool like a worm
      • No matter how sophisticated its actual payload
  • I think Occam's razor applies, here
    • That a national conspiracy of some sort is pretty unlikely
      • Though not impossible
  • The more people you bring in on a secret
    • And developing Stuxnet would have been a huge secret
    • The more likely it is that someone will let something more telling slip
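
As a side note on that peer-to-peer update mechanism, here is a minimal sketch in Python of the general idea only, not Stuxnet's actual protocol or code: when two infected nodes meet, they compare payload versions and the stale side copies the newer payload, so updates can reach machines that never contact the command-and-control server.

    # Minimal sketch of peer-to-peer update propagation; illustrative only,
    # not Stuxnet's actual protocol or code.
    class Node:
        def __init__(self, name, version, payload):
            self.name = name
            self.version = version
            self.payload = payload

        def exchange(self, peer):
            # Whichever side holds the older payload pulls the newer one.
            if self.version < peer.version:
                self.version, self.payload = peer.version, peer.payload
            elif peer.version < self.version:
                peer.version, peer.payload = self.version, self.payload

    # Only B ever reached the command-and-control server, yet A and C
    # still end up with the newest payload through peer exchanges.
    a, b, c = Node("A", 1, "old"), Node("B", 2, "new"), Node("C", 1, "old")
    a.exchange(b)
    c.exchange(a)
    print(a.version, b.version, c.version)  # 2 2 2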

(08:17) News

(08:31) Intel using DRM to charge for processor features

  • http://arstechnica.com/hardware/news/2010/09/intels-upgradable-processor-good-sense-or-utter-catastrophe.ars
  • This is a more complex story to unpack than it first appears
  • Peter Bright at Ars Technica has the details of Intel's test program
  • They are trying this in four markets, with a limited number of OEMs
  • The crippled CPU will cost $90 in bulk
  • Customers can pay $50 to go online and unlock the CPU
  • The rationale is that the lower cost chip will make systems using it more affordable
  • That includes the cost of a traditional upgrade down the road
  • Bright does a good job of explaining how this scheme is far from new
  • It does, in some ways, resonate with how overclocking a CPU works
  • Intel and AMD already disable or limit what CPUs can do
  • I always thought this was for reasons of quality
  • Rather than throwing out a lower quality instance of a higher speed design
    • They lock it down to the capabilities it can run reliably
  • This is more cost effective and offers more options at more prices
  • Hardware hackers in the know can often tweak such cheap chips
    • Sometimes adding beefier cooling systems to offset the added power and thermal load
  • One striking difference is that the most you could do
    • Is void your warranty by overclocking a cheaper chip
  • It stands to reason that Intel would try to use the DMCA
    • To punish anyone trying to do the software equivalent
    • Of overclocking these intentionally disabled chips
  • Bright asks us, however, if a software unlockable chip
    • Is better than Intel cutting traces on the die
    • Removing all possibilities of getting more out of a cheaper chip
  • http://www.boingboing.net/2010/09/19/intel-drm-a-crippled.html
  • Cory, at Boing Boing, puts his finger on what is wrong with this scheme
  • Intel is not charging you to recover costs and make profit on the product as produced
  • I get Bright's arguments about how Intel can reduce overhead
    • By producing fewer varieties of chips and using software to re-introduce variety
      • On which they can hang commodity and premium price points
  • Here is where the notion breaks down, though
  • The software upgradable chip is more expensive than a comparable chip
    • Already running with the same maxed out capabilities
  • Bright does a good job of guessing Intel's reasoning here
  • The higher total cost is still cheaper than buying two chips (see the worked comparison after this list)
  • Isn't that so much better for the low end consumer
    • Who just couldn't afford to keep their cheaper PC running longer?
  • Cory's citation of the "If value, then right" theory coined by Siva Vaidhyanathan
    • Helps to understand why this reasoning is ultimately flawed
  • Intel is charging for what we could have done, not what we actually did
  • If I were to buy a cheaper processor, then a better one later
    • I would have two processors when I was done
  • I could re-use that cheap processor to build my kids a cheap PC
    • Or I could sell that old processor to defray some of my cost
    • Or donate it through a charity program for schools to also lessen my cost
  • With the software upgrade, it is absolutely not the equivalent of buying two chips
  • I am being charged as if I had gone that traditional route
    • But deprived of the options I'd get for owning two chips
      • Instead of one differentiated by software
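
To make that trade-off concrete, here is a small worked comparison in Python. The $90 bulk price and $50 unlock fee come from the article; the traditional-route chip prices and the resale value are hypothetical placeholders, not figures from Intel or Ars.

    # The software-unlock route: one chip, nothing left over to resell or donate.
    software_route = 90 + 50                  # $140, figures from the article

    # The traditional two-chip route, with hypothetical placeholder prices.
    cheap_chip = 90                           # hypothetical low-end chip today
    better_chip = 120                         # hypothetical full-featured chip later
    resale_of_cheap = 30                      # hypothetical resale or donation value
    traditional_route = cheap_chip + better_chip - resale_of_cheap   # $180 net

    print(software_route, traditional_route)  # 140 180

With these placeholder numbers the unlock route does come out cheaper in raw dollars, which is Bright's point, but only the traditional route leaves the buyer with a second chip to reuse, sell, or donate, which is exactly the value Cory argues the software upgrade strips away.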

(12:11) Ubuntu designer shares thoughts on context aware UI

  • This is encouraging news given that the computer desktop interface
    • Really has changed so little since it was cemented in early workstations and PCs
  • Apple, long hailed for its radical design thinking
    • Has really only made incremental changes to the core concepts
      • Of windows, icons, menus, and a pointer, or WIMP
  • http://linux.slashdot.org/story/10/09/21/1247213/Canonical-Designer-Demos-Ubuntu-Context-aware-UI
  • Slashdot links ultimately to a blog post by an Ubuntu designer, Christian Giordano
  • He points to the adoption of motion sensing in other devices
    • Like the Wii and the iPhone
  • It makes sense once he points it out
    • Much like the way touch has been creeping into general purpose computers
      • From the realm of mobile computing, especially smart phones
  • Giordano goes on to spitball a few scenarios
  • None of these requires fine-grained knowledge of where the user is
  • Rather they imagine concepts like leaning back from the screen
    • Triggering a video player going into full screen mode
  • Or the computer showing desktop notifications much larger
    • If a user is not sensed in front of the screen
  • He even has a video of some code he mocked up
  • The proof of concept uses Processing and some facial recognition
  • It demonstrates the scaling he was talking about
    • When the user moves away from the screen (a rough sketch of the idea follows this list)
  • I really like the idea of using parallax
  • As you see him leaning to the side in the video
    • You get a sense of how natural that motion would feel
    • For peeking around a foreground window
    • To see something in one of the windows it overlaps
  • I find the ideas here immensely appealing
  • It puts the sensors and processing power in the computer
    • To the task of interpreting more about the user's intent
  • Seeing it in action, it isn't too far different
    • From opportunistically scripting tasks
      • Making the computer do easily expected, repeated tasks
  • What Giordano is doing is inviting us to invent a new vocabulary
  • Let's go beyond the clumsy abilities of mousing and clicking
    • Or even memorizing an arcane set of keyboard shortcuts
  • His post and demo are not meant to be specifications
  • He is very clear to explain he just wants to start a conversation
  • This is a discussion I think is sorely needed
    • And to be honest could be wickedly fun
  • The most interesting questions to explore
    • Will be where motion tracking, maybe even other sensing
      • Has legs to replace or extend the metaphors with which we are familiar
  • The parallax windows example is the most compelling to me
    • But are there other examples that are just as general
      • As this very natural way of re-interpreting window management?
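
For the curious, here is a rough sketch of that scaling idea, written in Python with OpenCV rather than the Processing Giordano used, so everything here is an assumption about the concept, not a transcription of his demo: the width of the detected face stands in for distance from the screen, and interface elements scale up as the user moves away or leaves the frame.

    # Rough sketch only: webcam face width as a crude proxy for user distance.
    import cv2

    # Haar cascade for frontal faces that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def ui_scale(frame, reference_width=200, min_scale=1.0, max_scale=3.0):
        # A smaller detected face means a more distant user, so scale the UI up;
        # no face at all gets the maximum scale, like the big notifications idea.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return max_scale
        widest = max(w for (x, y, w, h) in faces)
        return max(min_scale, min(max_scale, reference_width / widest))

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("preview", frame)
        print("scale UI elements by %.2fx" % ui_scale(frame))
        if cv2.waitKey(100) == 27:   # press Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()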

(15:30) Anthropology of hackers

  • http://www.boingboing.net/2010/09/21/anthropology-of-hack.html
  • Cory at Boing Boing linked to a syllabus posted as an essay at The Atlantic
  • It is based on an anthropology of hackers course being taught
    • By Gabriella Coleman, an assistant professor at NYU
  • The intro notes that there haven't been any serious investigations
    • Into this subject but I disagree
  • They may not have been as academically informed, using principles of anthropology
    • But Steven Levy's "Hackers" and others have explored what makes hackers tick
  • Regardless, Coleman starts in a good place, dispelling the myths
    • Of hackers as attackers of systems and nefarious characters
  • Her starting definitions are simple
  • 'A "hacker" is a technologist with a love for computing and a "hack" is a clever technical solution arrived through a non-obvious means.'
  • The breadth and depth of the course is pleasantly surprising
  • Over the course of thirteen weeks
    • Coleman explores several sub-groups within hackers
  • In particular she teases apart some of the subtle distinctions
    • In both their motivating principles and their resulting actions
  • She also ties hacking culture into the larger fabric of history and society
  • What she touches on there resonates with the introduction to my podcast
  • There really is a strong intersection between society and technology
    • And hackers are the ones most often exploring that interface
  • The course also tries to avoid the mistakes other efforts have made
    • In trying to generalize or abstract out what it is to be a hacker
  • She unflinchingly digs into the ethical gray areas that often arise
    • From the full depth of the hacker ethic
      • Which is to be more strongly motivated by notions of freedom and curiosity
      • Than traditional bounds of society at large
  • The conclusion acknowledges the few areas the course isn't able to cover
  • In particular, information security is a pretty deep area in itself
    • Students are guided to additional reading materials on the topic
  • There also is not as much material to consider
    • For hackers and hacking outside of North America and Europe
  • Coleman has been teaching this class since 2007
  • Each year she also tries to tie in some current events
  • This year, she'll be touching on WikiLeaks
    • As well as hacking the workplace, using Scott Rosenberg's "Dreaming in Code" as reference
  • That is an excellent book, one I've read and from which I've taken inspiration
  • I really wish I was closer to New York so I could audit this course
  • It sounds utterly fascinating and I am jealous of anyone able to take it
  • It definitely sounds like more than what a single book can cover
    • Though perhaps Coleman would consider distilling some of this into a book or series of books
  • Maybe she would consider packaging it up into an OpenCourseWare offering
    • Or putting it together in some other fashion
      • So that teachers at other schools could use it as a basis for a similar offering
  • I could easily see UMD's school of digital humanities offering a class like this
    • Maryland Institute for Technology in the Humanities
    • http://mith.umd.edu/

(18:34) FCC approves wireless networking over white spaces

  • I have been following the development of television white spaces for a while
  • I even had some recent follow ups
  • All of the discussion at the FCC led to a final ruling this past week
  • http://arstechnica.com/tech-policy/news/2010/09/wifi-on-steroids-gets-final-rules-drops-spectrum-sensing.ars
  • As Nate Anderson at Ars Technica explains
    • All five commissioners unanimously approved the use of open TV channels
      • For unlicensed wireless networking
  • Opposition to date has largely come from broadcasters and makers of other unlicensed radio devices
    • In particular microphone manufacturers who I think
      • Have been squatting in the white spaces for a while
  • One of the proposed compromises, having devices actively listen for other signals before using an empty channel
    • Was dropped from the requirements
  • Proponents of white space for wireless feared that the additional components required
    • Could stall development of consumer devices
  • Instead, the remaining requirement is that white space devices must check a geo-location database
  • It would contain information about which television channels are in use where
    • And which are then unused and hence open for use (a toy illustration follows this list)
  • The commission seemed very aware that this was a landmark moment
  • Federal bodies rarely move very quickly, as one commissioner pointed out
    • This process started back in 2002 and was actually approved in 2008
  • What happened this week was to finalize the rules for new technologies
  • Anderson reminds us that Microsoft already has a white space trial running
    • I also shared a link recently that Google is doing the same
  • Both tech companies represent the interest in white spaces pretty well
  • Much of the current debate around internet policy
    • Runs aground on the fact that in the US we are bound by the last mile
  • At most, customers have two or three options from which to choose
  • In much of the country, there is far less choice because of the costs of infrastructure
  • Spectrum in the television band has the potential to help network rural areas
    • As it propagates well over long distances
  • As well as in urban areas traditionally better served
    • Because signals have better ability to penetrate buildings and other obstacles
  • If this develops like WiFi, as much of the press is speculating
    • Then it also opens up the field to many more players
      • Without the limiter of investing in much more wired infrastructure
      • Or dealing with the bureaucratic hassles of getting rights of way to pull new wire
  • http://arstechnica.com/tech-policy/news/2010/09/fcc-commish-no-need-for-net-neutrality-we-have-white-spaces.ars
  • Unfortunately, I think Commissioner McDowell takes this reasoning too far
  • In his comments on the rule making, he thinks we have less, perhaps even no need
    • For network neutrality since the barrier to entering white spaces is so much lower
  • He may be right, on a certain level, that competitive pressure
    • In an entirely unregulated space will help solve the problem
  • Much like WiFi has added an interesting if weaker wrinkle to mobile computing
  • But I think we'd still be well served to work through the implications
    • Of what it means for a carrier to be neutral
    • And how they are transparent to customers and accountable when they fail
  • http://arstechnica.com/tech-policy/news/2010/09/fcc-white-space-rules-inside-the-satanic-details.ars
  • If you are curious about the very detailed specifics
    • Especially about how dropping spectrum sensing may still present problems
  • Matthew Lasar at Ars goes much deeper into the specifics of the rules
  • As he notes, things may still change a bit, more to refine the rules
  • It remains to be seen how well the decisions the FCC made work out when implemented
  • I suspect, and Lasar implies, that opponents will still stir up trouble as we move forward
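
As a toy illustration of that geo-location database requirement, here is a sketch with entirely made-up data and lookup logic, not any real FCC database or API: a device reports its coordinates, the database answers with the TV channels registered nearby, and the device treats the remainder as open white space.

    # Toy model of the geo-location database requirement; hypothetical data only.
    TV_CHANNELS = set(range(2, 52))       # simplified TV channel numbering

    # Hypothetical registered broadcasts: (latitude, longitude, channel).
    REGISTERED = [
        (38.99, -76.94, 4),
        (38.99, -76.94, 7),
        (40.71, -74.01, 2),
    ]

    def occupied_near(lat, lon, radius_deg=1.0):
        # Channels with a registered broadcast within a crude lat/lon box.
        return {ch for (blat, blon, ch) in REGISTERED
                if abs(blat - lat) <= radius_deg and abs(blon - lon) <= radius_deg}

    def open_channels(lat, lon):
        # White-space channels a device at this location could transmit on.
        return sorted(TV_CHANNELS - occupied_near(lat, lon))

    # A device near the first two broadcasts would have to skip channels 4 and 7.
    print(open_channels(38.98, -76.93))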

(22:43) Following Up

(23:02) MPAA wants to know if ACTA can be used to block WikiLeaks

  • One of the biggest problems with the DMCA
    • Is that it has been put to many questionable uses beyond copyright infringement
  • Now we have our first hint of similar abuses of ACTA
    • Even before the agreement is finalized and signed
  • http://www.techdirt.com/articles/20100915/10324411026/mpaa-wants-to-know-if-acta-can-be-used-to-block-wikileaks.shtml
  • Mike Masnick at Techdirt discusses a report out of Mexico
    • Put together by Open Acta Mexico on an open information meeting on ACTA
    • Held at the Ministry of the Economy in Mexico last week
  • He calls attention to two points
  • One is a procedural question about the coordination
    • Or possible lack thereof
    • Between negotiators and elected representatives
  • The other and far more chilling quirk is the presence of an MPAA representative
  • Specifically, that person asked after using ACTA against dangerous sites like Wikileaks
  • The Open Acta Mexico folks asked the obvious question, what does this have to do with movies?
  • Masnick speculates it was a rhetorical trial balloon
  • It certainly is consistent with the growing trend of using copyright infringement
    • As an excuse to impose censorship on targets that would otherwise be hard to censor

(24:11) Judge quashes one of the mass infringement multi-party subpoenas

  • http://arstechnica.com/tech-policy/news/2010/09/judge-puts-hammer-down-on-hurt-locker-p2p-subpoenas.ars
  • Nate Anderson at Ars Technica has some good news about the mass subpoenas
    • Being sought by the US Copyright Group (USCG) to pursue infringers on behalf of the makers of the film, The Hurt Locker
  • USCG is now one of a few law firms pressing claims against many, many folks
    • But motivated purely by the desire to get cash settlements
      • Rather than try to change behavior or send a message
      • As the RIAA was claiming to be doing with its notorious lawsuits
  • Other subpoenas have come under fire for procedural shenanigans
  • In the DC district court, the question of attaching too many parties arose
  • Pushing through such a quantity of demands at once
    • Clearly stacks the deck in favor of the USCG
    • But subverts how subpoenas and lawsuits should proceed, based on merits and specific facts
  • In this instance, as Anderson explains
    • USCG didn't take that tactic of pressing too many claims in one go
  • Rather, a South Dakota judge took issue with jurisdictional questions
  • Anderson shares the relevant statutes that judge John Simko cited
    • And ruled that USCG had failed to observe
  • Simko also didn't look kindly on the subpoena to Midcontinent, the ISP in question
    • Only being sent by fax, noting that it should have been served in person
      • Or by registered mail at the least
  • The USCG has not replied to Simko's ruling and appears to be letting it slide
  • Anderson speculates that may be a result of calculating return on investment
  • I infer that there is a troubling further question
  • If USCG lets subpoenas go without contesting moves to squelch them
    • Could they just keep going on flooding the courts
      • As long as the defendants, IP addresses and ISPs are new in each case?
  • Is it possible for a judge to haul them up short enough to change the cost to them?
  • I don't know and so far it hasn't happened
  • We probably won't know until some judge elevates the issue to that level of concern
    • Rather than responding piecemeal to each subpoena or claim as it enters the docket

(26:26) Outro
