This blog will now be located at:
This blog will stay up but will not be updated. Go to the above link for future updates.
We come to you
We offer prompt, professional, courteous service backed by more than twenty years of experience with residential and small-business clients, providing solutions and fixing computer and network issues at reasonable rates.
Services we provide:
Please call if you have any questions about other services we may provide.
We service Angier, Apex, Cary, Duncan, Fuquay Varina, Garner, Holly Springs, Lillington, Raleigh, Willow Spring and surrounding NC areas.
Voted best of 2015 and 2016 and rated top Pro 2017 and 2018 on Thumbtack.
© 2013 - 2018 NC Computer Tech
There’s a standard in the works for ethernet gear to feed faster Wi-Fi access points, but with rival industry groups pushing two different specifications, it might take a while to finish.
Wi-Fi is getting fast enough that Gigabit ethernet can’t keep up with the most advanced access points, which use 802.11ac Wave 2 technology. Users could go to 10-Gigabit ethernet, but for most that would require installing more advanced cable. So the search is on for something in between that works on the most common kinds of cable over at least 100 meters.
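For a rough sense of the mismatch, here is a back-of-the-envelope sketch; the per-stream figure is the nominal 802.11ac PHY rate (80 MHz channel, 256-QAM, short guard interval), not measured throughput, so real-world numbers will be lower:

```python
# Nominal 802.11ac PHY rate per spatial stream at 80 MHz,
# 256-QAM, short guard interval.
per_stream_mbps = 433.3

# A Wave 2 access point with four spatial streams:
wave2_peak_mbps = 4 * per_stream_mbps  # ~1733 Mbps

gigabit_ethernet_mbps = 1000

print(f"Wave 2 peak PHY rate: {wave2_peak_mbps:.0f} Mbps")
print(f"Overshoots Gigabit ethernet by: {wave2_peak_mbps - gigabit_ethernet_mbps:.0f} Mbps")
```

Even after protocol overhead, a loaded Wave 2 access point can push more traffic than a single Gigabit ethernet uplink can carry, which is exactly the gap 2.5G/5G aims to fill.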
Most likely you’ll have a choice of 2.5Gbps (bits per second) and 5Gbps; that much isn’t in dispute. Some vendors have already announced components and designs for such products, but there’s no guarantee that systems built with parts from the two camps will work together. Enterprises want to be able to mix and match gear from any vendor they like, so the official IEEE group for ethernet standards voted last month to form a task group to set a standard.
Now, the two rival camps will have to work out which technologies go into the standard and which don’t. This isn’t the first time that competing teams of companies have pushed different approaches before a common specification is set, but that kind of rivalry sometimes leaves potential buyers waiting.
One of the groups involved, the MGBase-T Alliance, officially announced its membership on Monday. Members include Avaya, Aruba Networks and Brocade Communications, as well as component vendors Broadcom and Freescale Semiconductor. The MGBase-T Alliance was formed in June.
The other big player is the NBase-T Alliance, formed in late October by Cisco Systems, Xilinx, Freescale and Aquantia, a company that’s already making 2.5G/5G components.
Enterprises will probably be able to buy 2.5G/5G equipment starting in the second quarter of next year, Dell’Oro Group analyst Alan Weckel said. Considering that the task group to hash out the standard hasn’t even met yet, there’s a good chance those products will be based on pre-standard technology.
When vendors jump the gun, it can complicate and slow down standards development. The IEEE 802.11n specification took several years to set, partly because Wi-Fi manufacturers shipped pre-standard products to tap into demand for faster wireless LANs. The wait went on so long that eventually the Wi-Fi Alliance began certifying products based on a late-stage draft of the standard, the first time it had done so.
For a proposed IEEE standard to pass, it needs three-quarters of the votes in the task group. Officially, the members represent themselves and not their employers. But when vendors are divided up into separate camps, that consensus can be harder to reach.
“I’d like to say this is just noise, but the reality is, it’s not,” said John D’Ambrosia, chairman of the Ethernet Alliance industry group. “There clearly are players lined up on both sides.”
Members of the Ethernet Alliance, which promotes IEEE ethernet standards, agree there needs to be a single specification, D’Ambrosia said. “We’ve reached out to these groups to emphasize that point,” he said.
One good thing about the competing efforts is that they show there’s strong demand for the technology, he added.
Joe Byrne, senior manager for digital networking at Freescale, is hopeful the showdown won’t take long. His company joined both groups because it wants to be able to serve all system makers.
“One of the groups will end up building more momentum behind it,” by recruiting more big-name vendors, reaching out to users and participating in the standards process, Byrne said. “Things could get tipped pretty early on, even though there are competing standards. It’s to nobody’s benefit if it gets protracted too long.”
Ever heard of a “zettabyte”? It’s the same as 1,000 exabytes or, in more familiar terms, 1,099,511,627,776 gigabytes. By 2018, global IP traffic (fixed and mobile) will hit 1.6 zettabytes, Cisco claims in a new report—nearly three times the total traffic carried in 2013.
Put another way, 2018’s global IP traffic—more than 1.5 trillion gigabytes’ worth—will be more than all the IP traffic carried globally between 1984 and 2013. That was a comparatively unimpressive 1.3 zettabytes.
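The gigabyte figure above checks out if you read it in binary units (a zebibyte over a gibibyte), a quick sanity check:

```python
# 1 zettabyte (binary) = 2**70 bytes; 1 gigabyte (binary) = 2**30 bytes.
gigabytes_per_zettabyte = 2**70 // 2**30  # = 2**40

print(f"{gigabytes_per_zettabyte:,}")  # 1,099,511,627,776 — the figure quoted above
```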
More relevant to those of us who are watching more of our video online is this factoid from Cisco: We’re going to be contributing to that boost in traffic. Cisco says IP video will account for 79 percent of all IP traffic by 2018. That’s up from 66 percent in 2013.
Ultra HD video—also known as 4K, which is four times the resolution of 1080p HD—will account for 11 percent of IP video traffic by 2018, up from 0.1 percent in 2013. HD video will account for 52 percent of IP video traffic by then, up from 36 percent last year. SD (standard definition) video will account for the rest.
So will that mean more feuds between content providers and distributors like the public finger-pointing we’ve seen recently from Netflix and Verizon? After all, the increased amount of streaming video is putting a strain on network traffic—or so Internet service providers would argue. And that’s before the kind of increases that Cisco is forecasting in its report.
At least broadband speeds are expected to improve. Cisco says that world-average broadband speeds will reach 42 Mbps by 2018, up from 16 Mbps at the end of last year. By 2018, 55 percent of broadband connections will be faster than 10 Mbps, and average speeds in Japan and South Korea will approach 100 Mbps.
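Cisco’s figures imply an annual growth rate of roughly 21 percent in average broadband speed; a quick calculation, assuming the five-year span from the end of 2013 to 2018:

```python
start_mbps, end_mbps, years = 16, 42, 5

# Compound annual growth rate implied by Cisco's forecast.
cagr = (end_mbps / start_mbps) ** (1 / years) - 1

print(f"Implied annual growth: {cagr:.1%}")  # ~21.3%
```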
Intel could soon bring to market a faster version of its Thunderbolt connector technology with a throughput of 50Gbps, but the company is biding its time until there is a need for faster connectors.
Thunderbolt technology connects computers to peripherals like external hard drives at much faster speeds than USB 3.0. Thunderbolt ports are found in Macs and select Windows PCs, but are more expensive than USB technology, which is found in most PCs that ship today.
The latest iteration, Thunderbolt 2, transfers data at 20Gbps (bits per second). Intel is researching how to speed up Thunderbolt and could potentially develop and deploy a faster connector for PCs based on silicon photonics technology, which combines silicon components with optical networking, said Mario Paniccia, Intel fellow and general manager of silicon photonics operations.
The thin connector could move data at speeds of 25Gbps for each fiber in the cable and scale up to 50Gbps with two fibers, which Paniccia said is sufficient for consumer devices. But he also said the throughput provided by the current Thunderbolt 2 is enough and there is no need for a drastic upgrade any time soon.
“When the need is there for higher speed—25Gbps or above—we will aggressively go after that market,” Paniccia said.
Instead of consumer devices, Intel is introducing silicon photonics into data centers through optical cables called MXC. The cables, which could have up to 64 fibers, can transfer data at net speeds of up to 1.6Tbps, and stretch up to 300 meters between servers or other data-center equipment. Technology from those cables could be scaled down and adapted for a Thunderbolt successor, Paniccia said.
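The scaling in both cases is simple per-fiber arithmetic at 25Gbps a fiber, which is how the consumer and data-center numbers quoted above line up:

```python
per_fiber_gbps = 25

# Thunderbolt-successor sketch: two fibers in the cable.
thunderbolt_successor_gbps = 2 * per_fiber_gbps   # 50 Gbps

# MXC data-center cable with up to 64 fibers.
mxc_tbps = 64 * per_fiber_gbps / 1000             # 1.6 Tbps

print(f"Two-fiber connector: {thunderbolt_successor_gbps} Gbps")
print(f"64-fiber MXC cable: {mxc_tbps} Tbps")
```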
Intel will need to talk to companies making cables to bring Thunderbolt successor cables to market, Paniccia said, but cautioned that such cables could be “years away.”
Intel had previously said it would bring a 50Gbps Thunderbolt replacement to market by 2015. Thunderbolt, originally named “Light Peak,” was envisioned as an optical technology, but the first cables introduced in the Mac in 2011 were based on copper. Optical cables were later added, but copper cables remain dominant.
Adoption of the interconnect technology has been slow as the cables and connectors are expensive, said Nathan Brookwood, principal analyst at Insight 64.
“People have already been complaining since it first appeared,” Brookwood said.
Connectors based on silicon photonics are more relevant to data centers because of the economics, Brookwood said. With the fast throughput, one MXC cable can replace many copper connectors.
Many believed Thunderbolt would put a dent in the adoption of the slower USB 3.0, but that has not happened; most PCs still ship with USB 3.0 and USB 2.0 ports. USB 3.0 provides data transfer rates of up to 5Gbps, while the latest USB 3.1 specification, announced last year, supports 10Gbps.
Intel is also working on low-power Thunderbolt technology for mobile phones and tablets, which largely still use slower micro-USB 2.0 ports. But Intel has said mobile Thunderbolt adoption could be blunted by WiGig, which can transfer data wirelessly at up to 7Gbps.
Intel has researched silicon photonics technology for more than a decade. The first MXC optical cables are thinner than competitive copper cables used in data centers, Intel has said. The cables can support a range of protocols including ethernet and PCI-Express 3.0.
Copper cables are slowly being replaced and will ultimately give way to optical fibers, Paniccia said, but from a product perspective the company is currently focusing silicon photonics on the data-center market.
Back in February 2011, when the global Internet Assigned Numbers Authority (IANA) allocated the last blocks of IPv4 address space to the five regional Internet registries (that further distribute IP addresses), many experts warned of a fast approaching crisis that would severely affect Internet connectivity.
It was believed that the available IPv4 addresses would be exhausted within months. But today, three years later, the American Registry for Internet Numbers (ARIN) is still handing out IPv4 addresses in the US and Canada. In an article titled Whatever happened to the IPv4 address crisis?, Lee Schlesinger, Network World’s former test center director, shared his views on how far we can go this way and why the IPv6 adoption rate is still slow.
According to John Curran, President and CEO of ARIN, the organization still has “approximately 24 million IPv4 addresses in the available pool for the region,” which he predicts will be handed out by “sometime in 2014”. But that doesn’t mean the shortage will begin this year, as addresses will still be available to be assigned to operators’ clients for a while longer. Moreover, stable networks (that aren’t expanding) would easily reuse addresses, Schlesinger argues.
So, what happened to the crisis everyone was talking about? Well, it’s just been pushed out due to multiple factors. Schlesinger touches upon some of them, including use of carrier-grade network address translation (CGNAT), carriers directly purchasing IP addresses from each other, and ARIN reclaiming unused addresses.
Coming to IPv6, Schlesinger tries to answer an important question: If this new IP version is better, and has the ability to provide 2^128 IP addresses, why hasn’t everyone just switched over to it?
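The scale of that 2^128 figure is easier to appreciate next to IPv4’s 32-bit space; Python’s standard-library ipaddress module makes the address-length difference concrete:

```python
import ipaddress

ipv4_total = 2**32   # about 4.3 billion addresses
ipv6_total = 2**128

print(f"IPv4 addresses: {ipv4_total:,}")
print(f"IPv6 addresses: {ipv6_total:,}")
print(f"IPv6 space is 2**96 = {ipv6_total // ipv4_total:,} times larger")

# The stdlib reflects the address lengths directly:
assert ipaddress.ip_address("192.0.2.1").max_prefixlen == 32
assert ipaddress.ip_address("2001:db8::1").max_prefixlen == 128
```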
Firstly, it isn’t backward compatible with IPv4, and needs to be implemented end to end. Secondly, as John Brzozowski, fellow and chief architect for IPv6 at Comcast Cable puts it: “Service providers didn’t want to implement IPv6 because the content providers weren’t there, and content providers didn’t want to implement it because the service providers weren’t there”, a classic chicken-and-egg problem.
Despite all the hurdles, Schlesinger says there is steady progress in IPv6 adoption and implementation. He supports his argument with various statistics, including a graph from Google showing that the percentage of users who access Google over IPv6 has increased over the past few years.
Schlesinger believes that IPv4 addresses will remain in use for some time to come, and concludes by quoting Phil Roberts, technology program manager for the Internet Society, who says, “It takes a while to transition. After all this is done it would be a great graduate thesis for someone to see why it has taken so long”.
You can read Lee Schlesinger’s full story here.
Several people have come forward to report wireless problems with select Sony Vaio notebooks – particularly the Vaio Fit – over the past few months. Issues range from capped speeds over Wi-Fi to signal strength significantly diminishing the further the system is from a router.
One of the most prominent threads on Sony’s support forum suggests the issue stems from the Broadcom wireless card (BCM43142) inside the machine. One user reported a maximum signal strength of just two bars, with speed tests showing download speeds of just 1MB/sec and upload speeds anywhere from 1 to 5MB/sec.
Sony issued a series of updates for the wireless card about a week ago, according to Vaio Fit owners, but these did little to quell the connection issues. Multiple users said they have contacted customer support and been offered numerous fixes, most of which didn’t work. The only solution, it seems, is to physically replace the Broadcom card with a unit from Intel (Centrino 6235).
Replacing the Wi-Fi card is a simple enough procedure that could even be accomplished by those who aren’t very tech-savvy, but having to do so on your own dime on a brand-new notebook is uncalled for. We’ve reached out to Sony for comment on the issue but have not received a reply as of this writing. If we do hear back, we’ll update this story accordingly.
Do you own a Sony Fit laptop that has experienced similar Wi-Fi issues? If so, have you had any luck with Sony customer support in solving the problem?
In a move to bolster net neutrality across Europe, EU officials are discussing plans that would prohibit ISPs from blocking or throttling online sites and services. Currently, the Netherlands is one of the few areas to have implemented its own strict net neutrality policy; the vast majority of EU countries have not.
In a speech to the European Parliament in Brussels, the European Commission’s Digital Agenda VP Neelie Kroes argued that “some ISPs deliberately degrade” and even outright block network services like Skype and WhatsApp for anti-competitive reasons. Kroes is looking to regulators to craft policies that curb these dubious practices without stifling competition and innovation, while encouraging transparency and consumer choice.
“But equally it’s clear to me that many Europeans expect protection against such commercial tactics. And that is exactly the EU safeguard we will be providing. A safeguard for every European, on every device, on every network: a guarantee of access to the full and open internet, without any blocking or throttling of competing services.”
Source: EC Digital Agenda VP Neelie Kroes
ISPs often argue a legitimate need to prioritize certain types of network traffic over others, a practice often referred to as traffic shaping. Analyzing data packets to judge their importance and prioritizing them accordingly is a useful tool for halting service-crippling DDoS attacks and for ensuring that latency-sensitive services like VoIP operate smoothly.
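The prioritization idea can be sketched as a simple priority scheduler. This is an illustrative toy, not how any particular ISP implements shaping, and the traffic classes and their ordering are invented for the example:

```python
import heapq

# Lower number = higher priority; classes are invented for illustration.
PRIORITY = {"voip": 0, "web": 1, "bulk": 2}

class Shaper:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._queue, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

shaper = Shaper()
shaper.enqueue("bulk", "torrent-chunk")
shaper.enqueue("voip", "call-frame")
shaper.enqueue("web", "http-response")

print(shaper.dequeue())  # call-frame: latency-sensitive traffic goes out first
```

The net neutrality debate is essentially about where this kind of reordering stops being network management and starts being discrimination against competing services.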
If all goes well and Kroes’ recommendations take wing with EU lawmakers, a set of fully fleshed-out regulations still may not be in place across EU member states until 2015.
Net neutrality, and what precisely it should mean, has remained a contentious issue since Comcast was accused of throttling P2P network traffic in 2007. In the U.S., the FCC released a set of guidelines in 2010 in an effort to promote net neutrality, but a federal court ruled that the FCC did not have the power to enforce such rules.
Netflix has released a list ranking the major Internet Service Providers in the United States running its service — and Google Fiber is at the top of the heap.
Ken Florance, vice president of content delivery at Netflix, lavished praise on the Internet giant in a blog post on Tuesday, describing Google Fiber as “the most consistently fast ISP in America, according to actual user experience on Netflix streams in November.”
The other ISPs didn’t get quite the same attention. Here’s more:
Broadly, cable shows better than DSL. AT&T U-verse, which is a hybrid fiber-DSL service, shows quite poorly compared to Verizon Fios, which is pure fiber. Charter moved down two positions since October. Verizon mobile has 40% higher performance than AT&T mobile.
To put this all into perspective, Florance specified that Netflix’s user base of approximately 30 million members streams more than one billion hours of Netflix content per month.
Thus, he asserted that this data should be considered “very reliable” in how it compares ISPs in terms of real world performance.
Here is Netflix’s full list of the top 21 Internet Service Providers running the online rental service during the month of November 2012: