Monday, March 31, 2008

Verizon "Northern States" Deal is Complete with Fairpoint

Yesterday, the New Hampshire Public Utilities Commission held a last-minute meeting to approve Verizon's sale of its Maine, New Hampshire and Vermont landline operations to FairPoint Communications. Regulators were concerned when they learned last week that FairPoint had to spend an additional $17 million on bonds in order to finance the deal for the three states.

Today, a Verizon press release explained some of the details of how the transaction was handled. Here's a piece from that press release:

Verizon Communications Inc. (NYSE:VZ) today announced the completion of the spin-off of the shares of Northern New England Spinco Inc. (Spinco) to Verizon stockholders. Spinco held specified assets and liabilities that were used in Verizon's local exchange business and related activities in Maine, New Hampshire and Vermont. Immediately following the spin-off, Spinco merged with FairPoint Communications, Inc. (NYSE: FRP), resulting in Verizon stockholders collectively owning approximately 60 percent of FairPoint common stock.

Verizon stockholders are receiving one share of FairPoint common stock for every 53.0245 shares of Verizon common stock they owned as of March 7, 2008. This is equivalent to 0.0189 shares of FairPoint common stock for each share of Verizon common stock owned as of March 7, 2008. FairPoint will pay cash in lieu of any fraction of a share of FairPoint common stock.
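
To make the conversion arithmetic concrete, here's a minimal sketch (the 1,000-share holding is hypothetical; the 53.0245 ratio comes from the press release above):

```python
# Worked example of the conversion ratio quoted above.
# The 1,000-share holding is hypothetical.
RATIO = 53.0245                     # Verizon shares per FairPoint share

print(round(1 / RATIO, 4))          # 0.0189 -> matches the stated per-share rate

verizon_shares = 1000
fairpoint = verizon_shares / RATIO  # ~18.86 FairPoint shares
whole = int(fairpoint)
print(whole, round(fairpoint - whole, 4))  # 18 whole shares; the fraction is paid in cash
```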

Here's a brief summary of the deal, collected from various sources:

  • Verizon received a $1.16 billion cash payment from Spinco
  • Verizon will receive $551 million of 13 1/8% senior notes due in 2018, issued by Spinco
  • Verizon sheds over $1.4 billion in debt
  • Verizon gets over $500 million in tax writeoffs
  • FairPoint gets 1.6 million new landline customers
  • FairPoint gets 230,000 new DSL customers
  • FairPoint gets over 2,500 new employees from Verizon
  • FairPoint becomes the eighth largest telecom company in the United States

It was a big effort for the three states, FairPoint and Verizon to get this deal done.

MacBook Air Hacked

The ninth annual CanSecWest conference was held last week in Vancouver, British Columbia. CanSecWest focuses on applied digital security, bringing together industry luminaries in a relaxed environment that promotes collaboration and social networking.

A crowd favorite at the conference is the hacking contest, and last week the tradition continued. This year's target machines ran Ubuntu, Vista and OS X. Here are the details on the contest from the CanSecWest website:

Three targets, all patched. All in typical client configurations with typical user configurations. You hack it, you get to keep it.

Each has a file on it that contains the instructions on how to claim the prize.

Targets (typical road-warrior clients):

  • VAIO VGN-TZ37CN running Ubuntu 7.10
  • Fujitsu U810 running Vista Ultimate SP1
  • MacBook Air running OSX 10.5.2

This year's contest will begin on March 26th, and go during the presentation hours and breaks of the conference until March 28th. The main purpose of this contest is to present new vulnerabilities in these systems so that the affected vendor(s) can address them. Participation is open to any registered attendee of CanSecWest 2008.

Once you extract your claim ticket file from a laptop (note that doing so will involve executing code on the box, simple directory traversal style bugs are inadequate), you get to keep it. You also get to participate in 3com / Tipping Point's Zero Day Initiative, with the top award for remote, pre-auth, vulnerabilities being increased this year. Fine print and details on the cash prizes are available from Tipping Point's DVLabs blog.

Quick Overview:

  • Limit one laptop per contestant.
  • You can't use the same vulnerability to claim more than one box, if it is a cross-platform issue.
  • Thirty minute attack slots given to contestants at each box.
  • Attack slots will be scheduled at the contest start by the methods selected by the judges.
  • Attacks are done via crossover cable. (attacker controls default route)
  • RF attacks are done offsite by special arrangement...
  • No physical access to the machines.
  • Major web browsers (IE, Safari, Konqueror, Firefox), widely used and deployed plugin frameworks (AIR, Silverlight), IM clients (MSN, Adium, Skype, Pidgin, AOL, Yahoo), Mail readers (Outlook, Mail.app, Thunderbird, kmail) are all in scope.

Here are the results, according to Heise Online:

Of the three laptops to be hacked, the MacBook Air with Mac OS X 10.5.2 was the first to fall victim to crack attempts by participants in the PWN to OWN contest at CanSecWest. The laptops with Windows Vista SP1 and Ubuntu 7.10 remain uncompromised. According to information provided by organizers of the TippingPoint competition, Charlie Miller, Jake Honoroff and Mark Daniel of security service provider Independent Security Evaluators were able to take control of the device through a hole in the Safari web browser. The vulnerability has supposedly not yet been made public and is still under wraps until Apple is able to provide a patch. In addition to the $10,000 prize money, the winners also get to keep the MacBook as a bonus.

Here's more on the contest from ChannelWeb:

The vulnerability has been purchased by the Zero Day Initiative, and has been made known to Apple, which is now working on the issue, TippingPoint said. "Until Apple releases a patch for this issue, neither we nor the contestants will be giving out any additional information about the vulnerability."

Wednesday, March 26, 2008

Nine Inch Nails and Creative Commons: Share, Remix, Reuse — Legally

Nine Inch Nails, a popular alternative/industrial band, earlier this month released a 36 track instrumental collection titled Ghosts I–IV. If you are not familiar with Nine Inch Nails, the band was started in 1988 by Trent Reznor who, as the only official member, is the producer, singer, songwriter, and instrumentalist.

For Ghosts I–IV, Reznor has chosen a unique distribution method. Fans can get the first volume free by downloading it off the web, pay $5 to download all four volumes from Amazon, purchase a double CD for $10 or a deluxe edition set for $75, or buy a $300 (sold out) ultra-deluxe limited edition set (2,500 copies). Fans will also be able to purchase a $39 vinyl edition starting the first week of April.


What I find most exciting is the fact that the album is licensed under Creative Commons. If you are not familiar with Creative Commons licensing, here's a piece from the organization's website:

Creative Commons provides free tools that let authors, scientists, artists, and educators easily mark their creative work with the freedoms they want it to carry. You can use CC to change your copyright terms from "All Rights Reserved" to "Some Rights Reserved."

And here's more on the process from the site:

Creators choose a set of conditions they wish to apply to their work:

Attribution. You let others copy, distribute, display, and perform your copyrighted work — and derivative works based upon it — but only if they give credit the way you request.

Noncommercial. You let others copy, distribute, display, and perform your work — and derivative works based upon it — but for noncommercial purposes only.

No Derivative Works. You let others copy, distribute, display, and perform only verbatim copies of your work, not derivative works based upon it.

Share Alike. You allow others to distribute derivative works only under a license identical to the license that governs your work.

Reznor has decided to both give the music away and charge for it. In addition, by using Creative Commons licensing, he is giving others rights they would not have if he had used traditional music publishing licensing. Other bands, publishing companies, record companies and businesses will be watching this closely.

Tuesday, March 25, 2008

FCC Expands and Improves Broadband Data Collection

Last week, the Federal Communications Commission (FCC) adopted an order that will increase the precision and quality of broadband subscribership data collected every six months from broadband services providers.

I've written in the past about how the Federal Communications Commission currently defines broadband in the United States:

The Federal Communications Commission (FCC) generally defines broadband service as data transmission speeds exceeding 200 kilobits per second (Kbps), or 200,000 bits per second, in at least one direction: downstream (from the Internet to your computer) or upstream (from your computer to the Internet).

I've also written in the past about how the FCC currently collects broadband data. Here's a piece from a press release by House Subcommittee on Telecommunications and the Internet Chairman Ed Markey describing the current FCC broadband data collection methods:

"...the fact that current data collection methods used by the Federal Communications Commission (FCC) are inadequate and highly flawed. Currently, the FCC counts a single broadband subscriber in a 5-digit zip code as indicating the entire zip code has broadband availability, even if the sole subscriber is a business and not a residential consumer. This can lead to highly inaccurate and overly generous notions of actual broadband availability and use, particularly in rural areas where zip codes are quite large."

Here's a brief outline of the new order summarized from an excellent post at CNET's News.com:

200Kbps speeds are no longer considered "broadband"
768Kbps, the typical entry-level speed offered by major DSL providers, will be considered the low end of "basic broadband," a range that extends to just under 1.5Mbps.

Broadband service speeds will have to be reported both for uploads and downloads
Previously the FCC had six big categories of broadband speeds, and they effectively only tracked download speeds. Now the agency says it will require reporting on upload speeds.

Upload and download speeds will have to be reported in a more specific way
At the moment, the broadband speeds most commonly offered by cable and telephone companies are lumped into two major categories: those between 200Kbps and 2.5Mbps, and those between 2.5Mbps and 10Mbps. The FCC's new rules would require them to be broken down further, in an attempt to address charges that the current buckets have the potential to overstate the number of high-end subscriptions and understate the number of low-end subscriptions. Those new tiers will be: 1) 200Kbps to 768Kbps ("first generation data"); 2) 768Kbps to 1.5Mbps ("basic broadband"); 3) 1.5Mbps to 3Mbps; 4) 3Mbps to 6Mbps; and 5) 6Mbps and above.
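
As a quick illustration, here's a minimal sketch of a classifier for those new tiers; the boundaries follow the News.com summary above, and the handling of exact boundary values is my assumption:

```python
# A minimal sketch of the new FCC reporting tiers summarized above.
# Edge handling at exact boundary values is my assumption.
def fcc_tier(kbps: float) -> str:
    if kbps < 200:
        return "below reportable range"
    if kbps < 768:
        return "first generation data (200Kbps-768Kbps)"
    if kbps < 1500:
        return "basic broadband (768Kbps-1.5Mbps)"
    if kbps < 3000:
        return "1.5Mbps-3Mbps"
    if kbps < 6000:
        return "3Mbps-6Mbps"
    return "6Mbps and above"

for speed in (256, 768, 3000, 20000):
    print(speed, "->", fcc_tier(speed))
```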

Internet Service Providers (ISPs) will be required to report numbers of subscribers at the census-tract level
The ISPs will have to report the number of subscribers in each census tract they serve, broken down by speed tier. The FCC decided to use census tracts because researchers may be able to combine them with other demographic statistics collected by the U.S. Census, such as age and income level, to gain insight into what drives broadband penetration rates.

In addition, ISPs will not have to report the prices they charge at this time, but likely will have to in the future. You can read the March 19 FCC press release titled "FCC EXPANDS, IMPROVES BROADBAND DATA COLLECTION" here.

This is a significant and necessary decision that we all should be excited about.

Sunday, March 23, 2008

The FCC 700 MHz Auction Results Podcast

Mike Q and I recorded "The FCC 700 MHz Auction Results" podcast today. Below are the show notes. You can listen directly by turning up your speakers and clicking here. If you have iTunes installed you can get this one, listen to others, and subscribe to our podcasts by following this link.

If you don't have iTunes and want to listen to other podcasts and read shownotes using your web browser, turn up your speakers and click here.

Shownotes:


Intro: On March 18, FCC Auction 73 bidding round 261 ended and, after 38 days and $19.592 billion in bids (almost double the $10 billion the FCC had hoped for), the FCC closed out the auction. In this podcast we review and discuss the auction results.

Mike: Gordon, can you give us an overview of the auction results?

Sure Mike - this comes from the FCC auction website linked up in the shownotes.

Rounds: 261 (started on 1/24 and ended on 3/18)
Bidding Days: 38
Qualified Bidders: 214
Winning Bidders: 101 (1,090 licenses won)

Auction 73 concluded with 1090 provisionally winning bids covering 1091 licenses and totaling $19,592,420,000, as shown in the Integrated Spectrum Auction System. The provisionally winning bids for the A, B, C, and E Block licenses exceeded the aggregate reserve prices for those blocks. The provisionally winning bid for the D Block license, however, did not meet the applicable reserve price and thus did not become a winning bid. Accordingly, Auction 73 raised a total of $19,120,378,000 in winning bids and $18,957,582,150 in net winning bids (reflecting bidders' claimed bidding credit eligibility), as shown above.
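
One quick arithmetic cross-check on those figures: the gap between total provisional bids and total winning bids should equal the D Block bid that missed its reserve.

```python
# Cross-checking the FCC totals quoted above: the difference between all
# provisionally winning bids and the final winning-bid total should be
# the D Block bid that missed its reserve price.
total_provisional = 19_592_420_000
total_winning = 19_120_378_000

print(f"${total_provisional - total_winning:,}")  # $472,042,000 -- consistent
# with the lone ~$472 million D Block bid reported after the auction.
```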

Mike: Before we get into the auction results, can you give us an overview of the different spectrum blocks? I know we've done this before but - how about a quick refresher?

Sure Mike - this comes from a blog I wrote back on January 14.

Back in 2005 Congress passed a law that requires all U.S. TV stations to convert to all-digital broadcasts and give up analog spectrum in the 700 MHz frequency band. This law will free up 62 MHz of spectrum in the 700 MHz band and effectively eliminate channels 52 through 69. This conversion, which has a deadline of February 17, 2009, has freed up spectrum that is being split by the FCC into five blocks:

  • A-Block - 12 MHz, split up into 176 smaller economic areas
  • B-Block - 12 MHz, split up into 734 cellular market areas
  • C-Block - 22 MHz, split up into 12 regional licenses
  • D-Block - 10 MHz, combined with approximately 10 MHz allocated for public safety, as a single national license
  • E-Block - 6 MHz, split up into 176 smaller economic areas

So in summary, each spectrum block in the 700 MHz auction, except for the national public safety D-Block, has been assigned an area designation by the FCC.
All FCC areas, along with names, county lists, maps and map info data can be found on the Commission's website linked here.
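
As a quick sanity check, the five block bandwidths listed above do account for the full 62 MHz:

```python
# Quick arithmetic check: the five blocks above account for the full
# 62 MHz being freed in the 700 MHz band.
blocks_mhz = {"A": 12, "B": 12, "C": 22, "D": 10, "E": 6}
print(sum(blocks_mhz.values()), "MHz")  # 62 MHz
```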

Mike: How about a quick review of the D-Block again?

Sure Mike, this also comes from that January 14 blog:

The D-Block lately has been most interesting to watch. Early on it appeared Frontline Wireless would be one of the biggest bidders for D-Block spectrum - the company was setup for D-Block and had worked closely with the FCC on putting together specifications for the spectrum. Frontline built a formidable team including Vice Chairman Reed Hundt, who served as Chairman of the FCC between 1993 and 1997. The business plan, the organization, the technology seemed to all be in place........ On January 12 the company placed the following statement on their website:

Frontline Wireless is closed for business at this time. We have no further comment.

Another company, Cyren Call, also looked like it was planning to bid on the D-Block but did not.

What happened? Rumor has it Frontline could not attract enough funding - it seemed like a good investment, or at least you might think so up front. Many are now asking if the FCC's approach to solving the public safety interoperability problem is in trouble.

Mike: OK, how about the results?

Here's a summary from the Wall Street Journal:

Verizon and AT&T accounted for 80% of the nearly $20 billion in winning bids. AT&T agreed to pay $6.6 billion for 227 spectrum licenses in markets covering much of the country. Verizon Wireless, a joint venture of Verizon Communications Inc. and Vodafone Group PLC, won 109 licenses for $9.4 billion.

Dish Network Corp., which bid for spectrum through Frontier Wireless LLC, did acquire a significant footprint, winning 168 licenses throughout the country for $712 million. Satellite-TV providers are looking for a way into the high-speed Internet business to better compete with cable and phone companies. But Credit Suisse analyst Chris Larsen said in a research note that the particular segment of spectrum Dish acquired would make it difficult for the company to offer interactive wireless broadband service. He said the company could use the spectrum to broadcast data or for on-demand video.

Google had indicated interest in a nationwide package of licenses before the auction, but it bid just high enough to trigger rules that will force winners of one segment of spectrum, known as the C-block, to allow any mobile devices and applications on their networks. Verizon won the lion's share of spectrum in this segment. Google had pushed for the regulation since its efforts to sell some mobile services had been stymied by major carriers, which traditionally have strictly limited the kinds of devices that consumers could use on their networks. Even before the auction had wrapped up, Google scored a victory as Verizon voluntarily agreed to open its network to devices it doesn't sell through its own retail network. Verizon released details of its new policy on Wednesday.

Mike: Were there any licenses that did not get any bids?

There were 1,099 licenses auctioned and only eight did not receive any bids:

A-Block:
Lubbock, Texas
Wheeling, W.Va.

B-Block:
Bismarck, N.D.
Fargo, N.D.
Grand Forks, N.D.
Lee, Va.
Yancey, N.C.
Clarendon, S.C.

Mike: So, what will happen to these?

These licenses will need to be re-auctioned by the FCC. I'm guessing they were overpriced; the FCC will end up dropping the re-auction minimum bid, and they will end up going quickly.

Mike: What's going to happen with D-Block?

The public safety D-Block did not meet its minimum bid, and the FCC will have to decide what to do. It looks like the FCC could go one of two directions for the re-auction: drop the price or change the requirements.

From the start, the public safety D-Block auction was seen as one of the biggest auction challenges...... I've expressed my opinion on the D-Block in the past........ the FCC still has some major work ahead before they can close this one out.

This comes from InfoWorld:

On Thursday, the FCC voted to de-link the so-called D block from the rest of the auction results. The D block was a 10MHz block that was to be paired with another 10MHz controlled by public safety agencies, and the winning bidder would have been required to build a nationwide voice and data network to serve both public safety and commercial needs. But the FCC failed to receive its $1.33 billion minimum bid for the D block, with the lone $472 million bid coming from Qualcomm.

The FCC has no plans to immediately reauction the D block, a spokeswoman said. Instead, the agency "will consider its options for how to license this spectrum in the future," the FCC said in a news release.

Mike: So, it looks like the big carriers won?

For the most part, yes. Kevin Martin had an interesting quote in an EFluxMedia piece though:

"A bidder other than a nationwide incumbent won a license in every market," FCC chairman Kevin Martin said hinting that it’s possible for a "wireless third-pipe" competitor to emerge in every market across the U.S. This would increase the competition and the first one to benefit from it will be the consumer.

Things still could get interesting!

Saturday, March 22, 2008

Anime - Do You Get It?

Up early this morning with my 16-year-old daughter for a 1.5-hour drive to the Hynes Convention Center in Boston for Anime Boston 2008.

Anime Boston is an annual three-day Japanese animation convention held in Boston, Massachusetts, and I'm posting this entry here at the convention. Anime Boston has grown considerably since the first event was held in 2003. The conference is, once again (according to the conference website), presenting popular events including a masquerade, an anime music video contest, video programming rooms, an artists' alley and art show, karaoke, game shows, video games, a manga library, dances and balls, and much more...

I'll get to our experience here in a minute. Last night the conference kicked off with opening ceremonies, where Conference Co-Chair Andrea Finnin asked what has been referred to at the conference as a very frank question and gave a great answer. Here are both:

How many of you out there have parents who just don't get it? Brothers? Sisters? Friends? Significant others?

Well, this weekend you will all be surrounded by people who get it.... You and all the other attendees will be part of this big family who gets it!


I'll be honest - it's something I've tried to get but maybe have not gotten just yet. I think I came a little closer today, though. We got to the convention center at 9:00 and waited in a registration line for over three hours to get in. Yesterday afternoon, many stood in line for over 6 hours, only to be turned back.....


Standing in line with thousands of teenage gamers is probably the last thing someone my age would want to do on a Saturday morning. That said, I don't think I've ever been around such a large group of respectful, complimentary and helpful people - of any age. For example, I have not heard one of George Carlin's "7 words you are not allowed to broadcast" yet today!
If I were to compare the adult crowd at a Red Sox game late last summer to the people that are here today..... well..... there is no comparison.


Anime conventions are held in most of the major cities in the country - maybe you have children or grandchildren, nieces, nephews, etc......
Check one out if you get a chance and maybe you'll start to "get it" too.
See my photo-blog of the conference here: http://gsnyder.tumblr.com

Wednesday, March 19, 2008

FCC 700 MHz Auction is Over..... Sort Of

Yesterday afternoon, FCC Auction 73 bidding round 261 ended and, after 38 days and $19.592 billion in bids (almost double the $10 billion the FCC had hoped for), the FCC closed out the auction.

Eight of the C-Block regional licenses had a reserve price of $4.6 billion that, when passed early in the auction (round 17), triggered an open access provision in the auction. This meant bidders could bid on individual C-Block licenses and, it appears the 12 C-Block licenses may be split up among bidders. Most experts are predicting Verizon and/or AT&T will take most of the C-Block while a smaller group still believes Google will be in the mix.

The C-Block provision requires the C-Block winner(s) to give access to any device compatible with the network's chosen technology. This open access provision was pushed hard by Google and, whether Google is a winning bidder or not, Google will have access to this spectrum.

There were 1,099 licenses auctioned and only eight did not receive any bids:

A-Block:
Lubbock, Texas
Wheeling, W.Va.

B-Block:
Bismarck, N.D.
Fargo, N.D.
Grand Forks, N.D.
Lee, Va.
Yancey, N.C.
Clarendon, S.C.


These licenses will need to be re-auctioned by the FCC. I'm guessing they were overpriced; the FCC will end up dropping the re-auction minimum bid, and they will end up going quickly.

The public safety D-Block did not meet its minimum bid, and the FCC will have to decide what to do. It looks like the FCC could go one of two directions for the re-auction: drop the price or change the requirements. House Telecom Subcommittee Chairman Edward Markey (D-Mass.) is quoted on the D-Block in a post by RCR Wireless News:

“I believe that any new auction for the ‘D-block’ should be consistent with an overarching policy goal of advancing public-safety objectives and ultimately achieving a state-of-the-art, broadband infrastructure for first responders. In developing a plan for a re-auction of the ‘D-block,’ the FCC should also take into account the auction results to gauge the level of new competition achieved. Policymakers should also analyze whether a need for a high reserve price continues to exist. Moreover, I believe we must fully review the nature and authority of the public safety spectrum trust and whether this model should be retained or modified, the length of the license term, the build-out requirements and schedule of benchmarks for such build-out, the opportunities for ensuring further openness in wireless markets, the penalties associated with failure to fulfill license conditions, and other issues.”

From the start, the public safety D-Block auction was seen as one of the biggest auction challenges...... I've expressed my opinion on the D-Block in the past........ the FCC still has some major work ahead before they can close this one out.

Monday, March 17, 2008

Banning Video Games in Boston?

There's been a lot of controversial discussion here in Massachusetts - Boston Mayor Menino wants to ban retailers from selling violent video games to people under the age of 18. In an article today describing Menino's proposal, the Boston Herald published a list of some of the most violent games along with brief descriptions:

NARC - Player takes the role of a narcotics agent attempting to take a dangerous drug off the streets and shut down the KRAK cartel while being subject to temptations, including drugs and money. To enhance abilities, players can take drugs, including pot, Quaaludes, ecstasy, LSD and “Liquid Soul” which provides the ability to kick enemies’ heads off.

Grand Theft Auto: San Andreas - Player is a young man working with gangs to gain respect. His mission includes murder, theft, and destruction on every imaginable level. Player recovers his health by visiting prostitutes, then recovers funds by beating the prostitutes to death and taking their money. Player can wreak as much havoc as he likes without progressing through the game’s storyline.

Condemned - A creepy first-person shooter where gamers are on the trail of a serial killer. It features ferocious hand-to-hand combat scenes where gamers use an array of blunt objects such as metal pipes, nail-covered 2-by-4s, fire axes, sledgehammers and signposts to eliminate a host of deranged enemies.

Resident Evil 4 - The gamer is a special forces agent sent to recover the president’s kidnapped daughter. During the first minutes of play, it’s possible to find the corpse of a woman pinned to a wall by a pitchfork through her face.

50 Cent: Bulletproof - The game is loosely based on the gangster lifestyle of rapper Curtis ‘50 Cent’ Jackson. Player engages in gangster shootouts and loots the bodies of victims to buy new 50 Cent recordings and music videos.

A second Boston Herald article describes how gamers are claiming Menino's proposal is unconstitutional, citing nine federal court decisions that have rejected similar bids in recent years. Dennis McCauley, editor of GamePolitics.com and blogger for the Entertainment Consumers' Association, is also quoted in the second Herald article:

We don’t believe that a 10-year-old should be playing Grand Theft Auto, but it really is the parent’s responsibility to decide what the child should and shouldn’t play.

These are certainly games any parent would not want their young children playing.

Sunday, March 16, 2008

The Next-Generation Internet: IPv6 Overview Podcast

Mike Q and I recorded "The Next-Generation Internet: IPv6 Overview" podcast today. Below are the show notes. You can listen directly by turning up your speakers and clicking here.

If you have iTunes installed you can get this one, listen to others and subscribe to our podcasts by following this link.

If you don't have iTunes and want to listen to other podcasts and read shownotes you can click here.

Shownotes:

Intro: The world has changed significantly since the Internet was first created. IPv6 provides roughly 4.3 × 10^20 unique addresses for every square inch on the planet, and it is going to allow us to do things we've only dreamed of in the past. In this podcast we give an overview of IPv6.
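
That per-square-inch figure checks out, assuming the full 2^128 IPv6 address space and an Earth surface area of roughly 510 million square kilometers:

```python
# Sanity-checking the "4.3 x 10^20 addresses per square inch" figure.
# Earth's surface area (~510.1 million square kilometers) is my assumption.
total_addresses = 2 ** 128              # size of the IPv6 address space
earth_m2 = 510.1e6 * 1e6                # km^2 -> m^2
earth_in2 = earth_m2 * 1550.0031        # m^2 -> square inches

print(f"{total_addresses / earth_in2:.2e}")  # ~4.30e+20 addresses per square inch
```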

Mike: Gordon, before we get into the technology, can you give us an update on IPv6 history in the United States?

Sure Mike, this comes from a one-minute history of the Internet by Federal Computer Week at FCW.COM.


Mike: So, the federal government has ordered its agencies to become IPv6-capable by June of 2008, and this is going to happen in June on our federal government networks - how about businesses?

It's happening with business too, Mike. Let's take Verizon as an example, as quoted in a Light Reading post from last September.

Verizon Business, which began its first phase of deploying IPv6 on the public IP network in 2004, will complete the North America region in 2008 and move into the Asia-Pacific and European regions from late 2008 to 2009. The company will operate both IPv6 and IPv4, in what is known as a "dual stack" arrangement, on its multi protocol label switching (MPLS) network core. The company also has deployed IPv6 throughout its network access points (peering facilities) where Internet service providers exchange traffic.


Mike: So, what's the problem with IPv4?

It's a combination of a lot of things - Microsoft has a nice set of resources on IPv4 and IPv6 - let's use that as a guide:

The current version of IP (known as Version 4 or IPv4) has not been substantially changed since RFC 791 was published in 1981. IPv4 has proven to be robust, easily implemented and interoperable, and has stood the test of scaling an internetwork to a global utility the size of today’s Internet. This is a tribute to its initial design. However, the initial design did not anticipate the following:

  • The recent exponential growth of the Internet and the impending exhaustion of the IPv4 address space. IPv4 addresses have become relatively scarce, forcing some organizations to use a Network Address Translator (NAT) to map multiple private addresses to a single public IP address. While NATs promote reuse of the private address space, they do not support standards-based network layer security or the correct mapping of all higher layer protocols, and they can create problems when connecting two organizations that use the private address space. Additionally, the rising prominence of Internet-connected devices and appliances ensures that the public IPv4 address space will eventually be depleted.

  • The growth of the Internet and the ability of Internet backbone routers to maintain large routing tables. Because of the way that IPv4 network IDs have been and are currently allocated, there are routinely over 85,000 routes in the routing tables of Internet backbone routers. The current IPv4 Internet routing infrastructure is a combination of both flat and hierarchical routing.

  • The need for simpler configuration. Most current IPv4 implementations must be either manually configured or use a stateful address configuration protocol such as Dynamic Host Configuration Protocol (DHCP). With more computers and devices using IP, there is a need for a simpler and more automatic configuration of addresses and other configuration settings that do not rely on the administration of a DHCP infrastructure.

  • The requirement for security at the IP level. Private communication over a public medium like the Internet requires encryption services that protect the data being sent from being viewed or modified in transit. Although a standard now exists for providing security for IPv4 packets (known as Internet Protocol security or IPSec), this standard is optional and proprietary solutions are prevalent.

  • The need for better support for real-time delivery of data, also called quality of service (QoS). While standards for QoS exist for IPv4, real-time traffic support relies on the IPv4 Type of Service (TOS) field and the identification of the payload, typically using a UDP or TCP port. Unfortunately, the IPv4 TOS field has limited functionality and over time there were various local interpretations. In addition, payload identification using a TCP and UDP port is not possible when the IPv4 packet payload is encrypted.

To address these and other concerns, the Internet Engineering Task Force (IETF) has developed a suite of protocols and standards known as IP version 6 (IPv6). This new version, previously called IP-The Next Generation (IPng), incorporates the concepts of many proposed methods for updating the IPv4 protocol. The design of IPv6 is intentionally targeted for minimal impact on upper and lower layer protocols by avoiding the random addition of new features.
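
To make the scarcity and NAT points concrete, here's a minimal sketch using Python's ipaddress module (the specific addresses are arbitrary examples):

```python
# A small illustration of the scarcity/NAT point above: the whole IPv4
# space is about 4.3 billion addresses, and hosts behind a NAT share one
# public address while using RFC 1918 private space internally.
import ipaddress

print(f"{2 ** 32:,}")                                   # 4,294,967,296 IPv4 addresses
print(ipaddress.ip_address("192.168.1.10").is_private)  # True  (RFC 1918 private)
print(ipaddress.ip_address("4.2.2.2").is_private)       # False (publicly routable)
```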


Mike: OK - can you list the primary features of IPv6? What makes it different?

Sure Mike - this list also comes from Microsoft's website. The following are the features of the IPv6 protocol:
  1. New header format
  2. Large address space
  3. Efficient and hierarchical addressing and routing infrastructure
  4. Stateless and stateful address configuration
  5. Built-in security
  6. Better support for QoS
  7. New protocol for neighboring node interaction
  8. Extensibility
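
Before we walk through the list, here's a minimal illustration of items 2 and 3 (the large address space and hierarchical allocation) using Python's ipaddress module; the 2001:db8::/32 prefix is reserved for documentation:

```python
# A minimal look at two of the features above (large address space,
# hierarchical addressing) using Python's ipaddress module.
import ipaddress

addr = ipaddress.ip_address("2001:db8:0:0:0:0:0:1")
print(addr.compressed)      # 2001:db8::1 -- runs of zeros collapse to ::
print(addr.exploded)        # 2001:0db8:0000:0000:0000:0000:0000:0001

net = ipaddress.ip_network("2001:db8::/32")
print(net.num_addresses)    # 2**96 addresses inside a single /32 allocation
```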

Mike: Let's go through the list with a brief summary of each. Your first item on the list was the new header format. What's different?

Mike: How about number 2, large address space?

Mike: Number 3 was efficient and hierarchical addressing and routing infrastructure - can you describe?

Mike: How about number 4, stateless and stateful address configuration?

Mike: Number 5 was built-in security.

Mike: How about number 6, better support for QoS?

Mike: And number 7, new protocol for neighboring node interaction?

Mike: And finally, number 8, extensibility.

Mike: Are there any other things you want to add to the list?

Mike: Are we ready?

I always look at the end devices (even though there is so much more) and, if we just look at desktops, you have to look at Microsoft.

Microsoft started with the following implementations of IPv6; all subsequent versions and products continue to support IPv6:
The IPv6 protocol for the Windows Server 2003 and later families.
The IPv6 protocol for Windows XP (Service Pack 1 [SP1] and later).
The IPv6 protocol for Windows CE .NET version 4.1 and later.

The capture and parsing of IPv6 traffic is supported by Microsoft Network Monitor, supplied with Windows Server 2003 and later products.


Mike: This is a good overview - next week we'll get into some details on the IPv6 protocol!

Tuesday, March 11, 2008

Campus Internet Access: Shall We Seek Another Way?

Over the past 5 or 6 years I've been to a lot of different campuses scattered around the country. I can't think of one that I've been to recently that was not struggling with bandwidth issues. Students, faculty and staff on college campuses are like sponges when it comes to bandwidth - we soak up as much as the provider can supply.

Accessing bandwidth hungry applications during peak usage times can be very frustrating - especially if that application is part of a lecture or exam. In addition to the cost of bandwidth, colleges and universities are also responsible for installation, 24/7 maintenance and upgrading of the network infrastructure.

Perhaps it's time to consider another way. I've written in the past about successful public/private partnerships and today came across an interesting press release from AT&T. Here's a piece from that press release:

The University of Houston and AT&T Inc. (NYSE:T) today announced the nation's first planned deployment of AT&T U-verse services into student housing on a college campus. The cutting-edge TV and high speed Internet services will be included in every room of a 547,000-square-foot residence hall under construction for graduate and professional students.

These kinds of relationships make a lot of sense to me - a public university contracting with a private company to provide services. AT&T will be responsible for installing, maintaining and upgrading its network, while the University of Houston will be responsible for teaching and student learning. I would think it also passes a lot of BitTorrent/copyright liability from the University of Houston to AT&T...... it makes sense for both the university and AT&T to go this way for a number of reasons.

Also, from AT&T's perspective, it gets its products into students' hands .... an impressed and satisfied student is a future satisfied residential/business/wireless customer.
Here's more from the press release:


"We are delighted that University of Houston students will be able to enjoy the same advanced AT&T U-verse services as an ever-expanding number of consumers across the Houston area," said Ed Cholerton, AT&T vice president and general manager for the Houston area. "We share the university's commitment to the best communications and entertainment technology."

What will be next? I'm figuring on a wireless access option for students via an AT&T wireless network at the University of Houston.

It will be interesting to see how many other academic institutions and providers move in this direction.

Peer-to-Peer File Sharing

[Here's a recent piece I wrote for my monthly technology column in La Prensa, a Western Massachusetts Latino newspaper. To read a few of my previous La Prensa technology columns go here.]

Peer-to-peer (commonly referred to as "P2P" or "PtP") networks are commonly used to share music and video files on the Internet. Much of the illegal file sharing you hear about in the news is handled using P2P networks. These networks are also used for legal file sharing and, in some ways, they have gotten a bad name because of the sharing of copyrighted materials.

You may recall the early version of Napster, a software program developed by Northeastern University student Shawn Fanning in 1999. Napster worked using a variation of a P2P network (some call it hybrid P2P) that used a centralized server to maintain a list of who was online and which MP3 music files each user had for sharing. Because Napster used a centralized server, it was easy to trace users and effectively shut the service down, which the Recording Industry Association of America (RIAA) did in the fall of 2001 after filing a lawsuit against Napster.

As Napster was going through its legal battle, programmers were working to develop other file sharing programs that did not use a centralized server. One of the most influential of these new programs was BitTorrent, created by Bram Cohen and first released in the summer of 2001.

Hundreds of additional P2P programs have been created, almost all based on similar decentralized models. Some of the more common file sharing applications and networks include Gnutella, BearShare, Morpheus and FastTrack.

BitTorrent-type programs are true P2P programs, using ad-hoc connections so there is no central server. Every computer running a P2P program provides storage space, bandwidth and processing. As more people install and run the P2P program, more files are uploaded and downloaded and more computers participate in the file sharing process.

Here's how a P2P program works. Let's say you want to download a song (let's also say this song can be legally distributed) and you've got one of these P2P programs installed on your machine. You start the P2P program and type the name of the song you want in a search box. The program then goes out and looks for other users sharing that song. As users are found, the song starts to download to your computer. As more users sharing the same song are found, additional connections are made (each connection is often referred to as a torrent) and the download speed to your computer increases. Also, as you download the song, you start sharing the song with others connected to your computer.

Popular songs and videos can have hundreds of torrents involved in a single download.
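
Here's a highly simplified sketch of that multi-source download process; the peer names and chunk holdings are invented for illustration, and real P2P protocols handle discovery, integrity checks and re-sharing concurrently:

```python
# A highly simplified sketch of the multi-source download described
# above: the file is split into chunks and each discovered peer
# supplies the chunks it holds that we still need.
FILE_CHUNKS = 8

peers = {                    # peer -> chunk indexes that peer is sharing
    "peer_a": {0, 1, 2, 3},
    "peer_b": {2, 3, 4, 5},
    "peer_c": {5, 6, 7},
}

downloaded = set()
for peer, chunks in peers.items():
    needed = chunks - downloaded        # fetch only chunks we still lack
    downloaded |= needed
    print(f"{peer}: fetched chunks {sorted(needed)}")

print("complete:", downloaded == set(range(FILE_CHUNKS)))
```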

If you use P2P programs, you need to be very careful to download only content that can be legally shared. If you download illegal content, you can be caught, and lately some huge fines have been handed out to violators.

Also, be sure you are running up-to-date antivirus software and scan your system for spyware weekly. P2P networks can be used to spread malicious software.

Sunday, March 9, 2008

The iPhone Software Development Kit Podcast

Mike Q and I recorded "The iPhone Software Development Kit" podcast today. Below are the show notes. You can listen directly by turning up your speakers and clicking here. If you have iTunes installed you can get this one, listen to others and subscribe to our podcasts by following this link.
If you don't have iTunes and want to listen to other podcasts and read shownotes you can click here.

Shownotes:

Intro: On Thursday, March 6, 2008, Apple released the iPhone Software Development Kit (SDK) beta along with the App Store, a place where iPhone users will be able to get applications written for the iPhone. Apple also launched the Enterprise Beta Program.

Gordon: Mike, can you give us a quick rundown on what Apple released on Thursday?

Sure, much of our discussion today is based on an excellent post at macworld.com titled The iPhone Software FAQ. Macworld editors Jason Snell, Jonathan Seff, Dan Moren, Christopher Breen, and Rob Griffiths contributed to this article. They also thank Glenn Fleishman, Craig Hockenberry, and Daniel Jalkut for their feedback and contributions.

Here's how Macworld answered the question:

The SDK is a set of tools that lets independent programmers and software companies design, write, and test software that runs on the iPhone. Right now there's a beta version for developers, but a final version of the iPhone software that supports the installation of new programs written by independent programmers is due in late June.

As a part of the announcement, Apple introduced a new iPhone program, App Store, through which you'll be able to purchase, download, and update iPhone software. That will be available as part of the new iPhone Software 2.0 update in late June. That's when you'll be able to add third-party apps to your iPhone for the first time, at least via official channels.

Gordon: You blogged about your experience with the SDK - can you tell us about your first impressions?

I downloaded the new iPhone SDK and wrote about my first impressions. I did quite a bit of FORTRAN programming many years ago (more than 10) but haven't done a whole lot lately. The SDK took a long time to download (about 2 GB over my wireless connection) and about 45 minutes to install. I also downloaded a couple of the sample applications Apple provides (about 1 MB each). In about 15 minutes - it would have been shorter if I knew what I was doing - I was able to open a sample, compile it and run it on the simulator Apple provides.

I have no doubt that this is going to have a huge impact on mobile application development. It's really easy and really cool. If you teach programming, I suggest you download the SDK today, install it in your labs, and have your kids developing and running native iPhone apps by Monday afternoon. Get the SDK here. Even better, download Jing and have your students record the simulator running their iPhone apps and embed the videos in your department or faculty webpage - great for marketing! Wish I was 20 again!

Gordon: And you actually compiled a little Kalimba (African Thumb Piano) app. Where can we have a look?

You can go to my blog at http://q-ontech.blogspot.com/2008/03/iphone-sdk.html

Gordon: Apple is taking 30% of what is sold from the App Store - will shareware apps be available or will we have to pay for everything?

That's a good question, and one that was sort of answered in the macworld.com post. Macworld assumes Apple won't let you sell a "free" program that requires an unlock code. However, there are some other scenarios they expect to see. First, donationware: people will probably offer "free" programs that request a donation if you want to keep the project going. Macworld doesn't think Apple will have any problem with that, since the donation would be voluntary. Second, it's possible that you'll see two versions of various iPhone programs: a free "lite" version that's a good advertisement for a more feature-rich for-pay version.

Macworld also mentions Iconfactory’s Twitterrific, a Mac program that is free, but contains ads. For an “upgrade” fee, users can shut off the ads. Whether Apple would allow this to be handled within the program or there would need to be two separate versions of an iPhone version of Twitterrific remains to be seen.

Gordon: On Thursday, five companies demoed applications - can you give us a brief summary of what was shown?

From Macworld: Five companies showed off what they were able to put together with two weeks of engineering work and very few people involved. There were games from Electronic Arts (Spore) and Sega (Super Monkey Ball), an AIM client from AOL, medical software from Epocrates, and business software from Salesforce.com. The programs took advantage of the iPhone’s built-in accelerometer, Multi-Touch capabilities, interface elements, and more.

Gordon: I'm going to go back to the Macworld post again and take some questions directly from that FAQ for you to answer:

1. What kind of stuff does Apple say it won’t allow developers to create?
2. What if someone writes a malicious program?
3. What’s a “bandwidth hog?”
4. Can I buy these programs on my Mac, or just on the iPhone?
5. What about software updates?
6. What if you’ve synced your phone on one computer and then restore it on another? Do you lose your apps until you sync to the original?
7. If I buy a program for my iPhone, can I also transfer it to my significant other’s iPhone?
8. Can I download programs off the Web, or any place other than the App Store and iTunes?
9. What about internal, “private” software? What about beta testing?
10. Can I try the iPhone SDK and how could it be used in the classroom?

Gordon: Apple posted a roadmap video - can you tell us a little bit about that?

You can watch Steve Jobs' presentation and see what's ahead at http://www.apple.com/quicktime/qtv/iphoneroadmap

We hope you enjoy this 48-minute podcast!

Thursday, March 6, 2008

Internet Protocol version 6.0: An Excellent White Paper

Yesterday, 3G Americas published an excellent white paper titled Transitioning to IPv6. The white paper is directed specifically at wireless providers and includes a lot of good content on the transition. Here's a quote from a 3G Americas press release about the white paper:

The white paper by 3G Americas addresses the problems that will occur when new IPv4 address blocks are no longer available. Service providers will face increasing capital expenses and numerous challenges when attempting to operate their networks efficiently on a limited number of IPv4 addresses. Not only does transitioning to IPv6 solve the address exhaustion problem, it will likely enable new services perhaps impossible in an IPv4-only world. The 3G Americas’ white paper strongly recommends that rather than wait for the inevitable difficulties to arise, service providers should begin planning their transition to IPv6 as soon as possible.

The white paper takes a good look at how wireless providers will move their networks to IPv6, using three detailed case studies:

Case Study 1: Video Share service
Case Study 2: Gaming services
Case Study 3: Blackberry service

Using these case studies, the white paper provides recommendations on:

1. Developing a transition plan;
2. Using a phased approach;
3. Developing a solution for IPv4-IPv6 inter-networking; and
4. Security considerations

Chris Pearson, President of 3G Americas, is quoted in the press release:

The need to transition to IPv6 is upon us. The Internet continues to expand at a rapid pace, with wireless devices becoming major users of IP addresses. Transitioning to IPv6 will take significant effort, but it can no longer be delayed.

The white paper is 23 pages long (including a great glossary) and provides some excellent reading/classroom material - I'll be using it in the advanced telecom course I'm teaching this semester. You can download it here.

Tuesday, March 4, 2008

U.S. Fiber to the Home (FTTH) Ranking = Eighth in World

Last week, the FTTH Councils of North America, Europe and Asia-Pacific released a world ranking document titled Fiber to the Home Deployment Spreads Globally As More Economies Show Market Growth. The report lists the 14 economies in the world that currently have more than 1 percent of households connected directly to fiber networks. According to the release:

The global ranking follows the unified definition of FTTH terms announced by the three councils last year, and which has formed the basis for recent market research by each council. For completeness and accuracy the ranking includes both FTTH and FTTB (fiber-to-the-building) figures, while copper-based broadband access technologies (DSL, FTT-Curb, FTT-Node) are not included.

The United States has doubled its penetration rate to 2.3 percent over the past year, moving us up three places to eighth position. This doubling is no doubt driven by the Verizon FiOS rollout in this country.

Joe Savage, President of the FTTH Council North America, is quoted as follows:

“We’re delighted to see the U.S. moving up the global ranking, indicating a good beginning is underway. FTTH leadership, demonstrated by those leading countries, shows full national deployment is achievable. The future belongs to those countries that satisfy the broadband consumer’s need for speed. Our members – the FTTH equipment vendors and the service providers – are ready to help make it happen on a wide scale across North America.”

Here's a quote from Shoichi Hanatani, President of the FTTH Council Asia-Pacific:

"It is no accident that Asia-Pac continues to be the fastest growing region for FTTH in the world, with more subscribers connected on fiber than all other regions combined. The rollout of FTTH has been encouraged by forward-looking governments and regulators in the Asia-Pac region for several years now. They understand that FTTH is a key strategic national infrastructure."

Read the full release and get more information on the FTTH Council web site at www.ftthcouncil.org

Monday, March 3, 2008

Google's Trans-Pacific Fiber Optic Cable Project

Last week, on February 26, Google and five other international companies announced the Unity consortium. The group has agreed to build a five-pair, 10,000-kilometer fiber optic communications cable connecting the United States and Japan. According to a Google press release, each fiber pair will be capable of handling up to 960 Gigabits per second (Gbps), and the cable system will allow expansion up to eight fiber pairs.

At 5 pairs: (5 pairs) × (960 Gbps/pair) = 5 × 960 × 10^9 bps = 4.8 × 10^12 bps = 4.8 Terabits per second (Tbps)

At 8 pairs: (8 pairs) × (960 Gbps/pair) = 8 × 960 × 10^9 bps = 7.68 × 10^12 bps = 7.68 Terabits per second (Tbps)

The Unity consortium companies are:

Bharti Airtel - India's leading integrated telecommunications services provider.

Global Transit - A South Asian IP Transit network provider

Google - You know who they are!

KDDI - A Japanese information and communications company offering all communications services, from fixed to mobile.

Pacnet - An Asian company that owns and operates EAC-C2C, Asia's largest privately-owned submarine cable network at 36,800 km with design capacity of 10.24 Tbps.

SingTel - Asia's leading communications group providing a portfolio of services including voice and data services over fixed, wireless and Internet platforms.

By partnering with these providers, Google will be extending its reach into Asian markets - combined, Bharti Airtel and SingTel have over 232 million mobile and landline customers. In addition, the system will connect into other Asian cable systems and reach more customers. Here's more from the Google press release:

According to the TeleGeography Global Bandwidth Report, 2007, Trans-Pacific bandwidth demand has grown at a compounded annual growth rate (CAGR) of 63.7 percent between 2002 and 2007. It is expected to continue to grow strongly from 2008 to 2013, with total demand for capacity doubling roughly every two years.
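
A quick check on those growth numbers: a 63.7 percent CAGR actually implies doubling in well under two years, while doubling roughly every two years corresponds to a CAGR of about 41 percent:

```python
# Checking the bandwidth-demand figures quoted above.
import math

cagr = 0.637                                   # 2002-2007 compounded annual growth
doubling_years = math.log(2) / math.log(1 + cagr)
print(round(doubling_years, 2))                # ~1.41 years to double at that rate

# "Doubling roughly every two years" (the 2008-2013 forecast) implies a
# slower CAGR of about 2**(1/2) - 1, i.e. roughly 41 percent.
print(round(2 ** 0.5 - 1, 3))                  # 0.414
```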

It's interesting to see competing Asian market providers partnering in a system within a system, with each having ownership and management of individual fiber pairs - a testament to the power and influence Google has.

NEC Corporation and Tyco Telecommunications will build and install the system, with completion expected by the first quarter of 2010.

Sunday, March 2, 2008

Motivated and Committed People = Outstanding Work

I've been back and forth to Dallas a couple of times in the last two weeks - first for a futures conference presentation, and this past week for a two-day National Science Foundation (NSF) Advanced Technological Education (ATE) visiting committee meeting at the Convergence Technology Center at Collin College.

At the futures conference I spoke on globalization - specifically, how college courses need to morph to properly prepare students for today's and tomorrow's work. The reception, hospitality and quality of the event were outstanding, and I am so thankful I get invited to these kinds of events. I learn so much listening to other speakers and talking with attendees.

Last week was the two-day visiting committee meeting - larger National Science Foundation grants are required to appoint a National Visiting Committee (NVC) that meets once a year. According to the NVC Handbook published by the Evaluation Center at Western Michigan University, these committees are groups of advisors that work with grantees and NSF to help them achieve their goals and objectives. They assess the plans and progress of the project and report to NSF and the project leadership. Committee members also provide advice to the project staff and may serve as advocates for effective projects.

At the NVC meeting, among many other things, we had a lot of excellent discussion about the current and future state of converged communications and networks - what many are now calling unified communications/networking. I'd like to especially thank President Cary A. Israel and Executive Vice President Toni Jenkins from Collin College, along with Director Ann Beheler, Ann Blackman, Helen Sullivan and many others from the Convergence Technology Center at Collin College, for their hospitality, commitment, work, understanding and dedication to their students. It's always wonderful to see excellent work being done - especially when it is funded with taxpayer dollars.

Here's one photo of the NVC student lunch presenters taken on Thursday - each with a different story, and each incredibly EXCELLENT is all I can say. You can check out my iPhone Tumblr photoblog of both events (and a lot of other events) at http://gsnyder.tumblr.com/ - scroll down to see all photos.

I'll get back on my five-per-week (or so) blog schedule this week - I've got a bunch of posts started and I'm not going anywhere for the next couple of weeks!

Thanks again to all at Collin College in Texas.