Carbon nanotubes are a term we hear from time to time, but what exactly are they? Simply put, a carbon nanotube is a sheet of graphene just one atom thick rolled into a tube, which makes it lightweight yet as strong as steel. To most materials scientists, they’re also a near-perfect semiconductor (a material that conducts electricity but can also be switched off).
Recently, Max Shulaker at the Massachusetts Institute of Technology (MIT) and his team created the first complex carbon nanotube microprocessor, built atop a silicon wafer. A previous device had 178 transistors; Shulaker’s version has more than 14,000.
Compared to conventional silicon, carbon nanotube processors offer a couple of improvements. A processor made entirely of carbon nanotubes would be 10 times more efficient. Fabrication is also easier: silicon requires temperatures of 1,000°C or higher, while carbon nanotube transistors can be made at room temperature.
We are quickly reaching the end of what silicon can provide. Carbon nanotubes may be what we turn to next, as their limits have yet to be discovered.
Startup microchip developer Cerebras on Monday unveiled the largest computer processing chip ever built. The chip, called the “Wafer-Scale Engine” (WSE), was designed for the company’s deep learning machine, built to power AI systems. Most chips developed today can fit on the tip of your finger; the Cerebras chip, on the other hand, measures 8.5 inches on each side.
The WSE’s surface contains 400,000 cores and 1.2 trillion transistors; compared to the Nvidia V100 chip, the WSE has 3,000 times the memory circuits.
The new WSE offers a couple of key advantages. The first and most obvious is computational power: with so many processing cores and so much on-chip memory, the chip can handle thousands of models in the time it takes a conventional system to handle just one. The other problem the WSE chip solves is lag time between cores. Current servers with multiple chips slow down when sending data between the chips over the slower wires linking them on a circuit board.
Cerebras’ new chip will help solve complex AI problems faster and more cheaply while using far less energy than current systems.
Imagine if you could take the heat generated by the things we use every day (phones, laptops, cars, etc.) and convert it into electricity. Well, you might not have to imagine for very long: University of Utah mechanical engineering associate professor Mathieu Francoeur has discovered a way to produce more electricity from heat than previously thought possible by creating a silicon chip that converts more thermal radiation into electricity.
Researchers had previously determined that there is a theoretical limit to how much energy can be produced from heat, known as the “blackbody limit.” However, Francoeur and his team demonstrated that at the nanoscale it’s possible to go beyond this limit.
This technology opens up the possibility of not only cooling down portable devices but also gaining as much as 50% more battery life. It can also increase the amount of electricity a solar panel can generate and help reduce the wear and tear on computer processors without needing a bulky heatsink and fan. Read the full article to learn more.
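The blackbody limit mentioned above comes from the Stefan-Boltzmann law, which caps how much heat an ordinary surface can radiate in the far field. Here is a minimal sketch of that classical bound (standard textbook physics, not a model of the team’s near-field device):

```python
# The classical "blackbody limit": the Stefan-Boltzmann law caps far-field
# radiative flux at sigma * T^4. Francoeur's result exploits near-field
# effects at the nanoscale to get past this bound.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def blackbody_flux(t_kelvin: float) -> float:
    """Maximum far-field radiative flux (W/m^2) from an ideal surface at T."""
    return SIGMA * t_kelvin ** 4

def net_flux(t_hot: float, t_cold: float) -> float:
    """Net far-field exchange between two ideal blackbody surfaces (W/m^2)."""
    return SIGMA * (t_hot ** 4 - t_cold ** 4)

print(round(blackbody_flux(300)))  # room-temperature surface: 459 W/m^2
print(round(net_flux(400, 300)))   # hot surface vs. ambient: 992 W/m^2
```

Any harvester that stays in the far field is stuck under these numbers; shrinking the gap between surfaces to nanoscale distances is what lets the Utah chip collect more.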
According to security firm Check Point, a new Android malware has already infected more than 25 million devices. The malware, which researchers named “Agent Smith,” doesn’t steal data from users. Instead, it rewrites the code of apps and forces them to display more ads, or takes credit for the ads they already display, so that the malware’s operator can profit off the fraudulent views.
Although the malware primarily infected devices in India, with an estimated 15 million devices infected in that region alone, Check Point says it has infected more than 300,000 devices in the US. The malware’s creator attempted to add 11 apps to the Google Play Store; luckily, according to Check Point, Google removed all the malicious apps before they went live.
Check Point writes, “This application was as malicious as they come,” and the operation seems to be run by a Chinese company that claims to help developers publish their apps.
What was once intended for educational purposes when it was released back in 2012 is now the smallest desktop replacement computer. Recently, the makers of the Raspberry Pi released version 4 of the popular credit-card-sized computer. What makes this version different is that it has been upgraded with a 1.5GHz 64-bit quad-core ARM Cortex-A72 CPU and up to 4GB of DDR4 RAM. This makes version 4 comparable to an entry-level desktop computer while using only 15 watts of power.
The Raspberry Pi 4 is the first model to be powered by USB-C, and it can drive two 4K monitors via its two micro HDMI ports. Other new features include a Gigabit Ethernet port and two USB 3.0 ports.
After decades of research, scientists from Lancaster University in the UK have invented and patented what’s best described as universal computer memory.
There are two things that make this new memory revolutionary. The first is that the data doesn’t have to be refreshed constantly: with traditional memory, the data is volatile and must be continuously ‘refreshed’ to avoid being lost. This property makes possible computers that do not need to boot up and could instantaneously and imperceptibly go into sleep mode.
Universal computer memory’s second advantage is that it uses an incredibly small amount of energy, about 100 times less than current memory.
Over the past 30 years or so, transistors (the tiny switches that make up a processor) have consistently been shrinking, with recent models down to 7 nanometers, or about 20 atoms wide. We are fast approaching the physical minimum for transistor size. When that happens, the question is: how do we get faster processing speeds?
Conventional transistors can convey just two values of information: as a switch, a transistor is either on or off, which translates into the 1s and 0s of binary language. One way to increase processing capacity without adding more transistors is to increase how much information each transistor conveys by introducing intermediate states between the on and off states of binary devices.
Dr. Kyeongjae Cho, professor of materials science and engineering at The University of Texas at Dallas, and his team have developed a multi-value logic transistor based on zinc oxide. Cho’s device is capable of two electronically stable and reliable intermediate states between 0 and 1, boosting the number of logic values per transistor from two to three or four. This is a huge benefit: processors will be able to achieve incredible speeds without adding more transistors. Another benefit is that the technology is compatible with existing computer-chip configurations.
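To see why extra states per device pay off, note that n devices with b stable states apiece can represent b to the power n distinct values, so capacity grows much faster than it does by adding binary transistors. A small illustration (simple counting, not a model of Cho’s zinc-oxide device):

```python
# Counting argument for multi-value logic: n devices with b stable states
# can distinguish b ** n values, so raising b shrinks the device count
# needed to hold a given amount of information.

def distinct_values(devices: int, states: int) -> int:
    """How many values `devices` devices with `states` states can represent."""
    return states ** devices

def devices_needed(values: int, states: int) -> int:
    """Smallest device count whose capacity covers `values` (exact integers)."""
    count, capacity = 0, 1
    while capacity < values:
        capacity *= states
        count += 1
    return count

print(distinct_values(8, 2))        # 8 binary transistors: 256 values
print(distinct_values(8, 4))        # 8 quaternary transistors: 65536 values
print(devices_needed(2 ** 32, 2))   # a 32-bit range needs 32 binary devices
print(devices_needed(2 ** 32, 4))   # but only 16 quaternary ones
```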
Wi-Fi 6, also known as 802.11ax, is the next step in wireless innovation. The rollout of 802.11ax has already started; it allows enterprises and service providers to support new and emerging applications on the same wireless LAN (WLAN) infrastructure while delivering a higher grade of service.
One benefit of Wi-Fi 6 is an 8x speed boost over current Wi-Fi systems, to a theoretical maximum of 9.6Gbps. This speed boost opens up new kinds of applications that once needed to be tethered via Ethernet, such as virtual reality (VR) and streaming games and video in 4K or even 8K resolution. Another benefit is that Wi-Fi 6 supports both the 5GHz and 2.4GHz bands, meaning it’s backward compatible with all past versions of Wi-Fi.
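For a feel of what the headline number means, here is a back-of-the-envelope transfer-time comparison (theoretical link rates only; real-world throughput is far lower, and the file size is made up for illustration):

```python
# Rough transfer times at theoretical link rates. 9.6 Gbps is Wi-Fi 6's
# theoretical peak; 1.0 Gbps stands in for the gigabit Ethernet tether
# it could replace.

def transfer_seconds(size_gigabytes: float, link_gbps: float) -> float:
    """Seconds to move a file of `size_gigabytes` (decimal GB) at `link_gbps`."""
    bits = size_gigabytes * 8e9        # decimal gigabytes -> bits
    return bits / (link_gbps * 1e9)    # gigabits/s -> bits/s

print(round(transfer_seconds(25, 9.6), 1))  # 25 GB file over Wi-Fi 6: 20.8 s
print(round(transfer_seconds(25, 1.0), 1))  # same file over gigabit Ethernet: 200.0 s
```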
Wi-Fi 6 consumer routers are already available from the major manufacturers Asus, Netgear, and TP-Link, with prices ranging from $300 to $500.
At this year’s Computex, AMD (Advanced Micro Devices) debuted its third generation of Ryzen processors. This consumer-level processor is built around a new 7nm “Zen” core, which features 2x the floating-point throughput, 2x the cache size, and up to 15% more IPC (instructions per clock cycle). 3rd Generation Ryzen is also the world’s first processor line to support PCIe 4.0.
PCIe 4.0 is a huge leap over its predecessor in that it offers twice the bandwidth, allowing 32GB/s of data transfer in each direction versus 16GB/s on PCIe 3.0. This extra bandwidth, coupled with a PCIe 4.0 graphics card (such as the upcoming Navi RX 5000 series), can provide up to a 69% performance boost over an Nvidia RTX 2080 Ti, as noted in the keynote speech by AMD CEO Lisa Su.
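Those bandwidth figures can be derived from the lane rates: each PCIe 3.0 lane runs at 8 GT/s and each PCIe 4.0 lane at 16 GT/s, both with 128b/130b line encoding, and a x16 graphics slot aggregates 16 lanes. A quick check of the arithmetic:

```python
# Deriving per-direction PCIe x16 bandwidth from the per-lane transfer rate.
# With 128b/130b encoding, 128 of every 130 transferred bits are payload.

def pcie_bandwidth_gbs(gigatransfers_per_s: float, lanes: int = 16) -> float:
    """Usable one-direction bandwidth in GB/s for a PCIe 3.0/4.0 link."""
    payload_ratio = 128 / 130                           # 128b/130b encoding
    payload_gbits = gigatransfers_per_s * payload_ratio * lanes
    return payload_gbits / 8                            # bits -> bytes

print(round(pcie_bandwidth_gbs(8.0), 2))   # PCIe 3.0 x16: 15.75 GB/s
print(round(pcie_bandwidth_gbs(16.0), 2))  # PCIe 4.0 x16: 31.51 GB/s
```

The commonly quoted 16GB/s and 32GB/s figures round these up slightly.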
The Ryzen 7 3700X, which features 8 cores, 16 threads, a TDP of 65 watts, a base clock of 3.6 GHz, and a boost clock of 4.4 GHz, performed 33% faster than an Intel i7 9700K in Cinebench R20.
The 3rd Generation Ryzen family of processors will be available on July 7, 2019, with prices ranging from $329 for the Ryzen 7 3700X up to $499 for the Ryzen 9 3900X.
Researchers at Nvidia (known for their graphics chips) have developed a unique autonomous car. What makes this autonomous vehicle special is that it doesn’t follow instructions provided by an engineer or programmer. It instead relies on an algorithm that taught itself to drive by watching a human do it.
A network of artificial neurons processes data from the vehicle’s sensors and then delivers the commands to operate the steering wheel, brakes, and other systems. The result seems to match the responses you’d expect from a human driver. As amazing an accomplishment as this is, there is an unsettling component: it isn’t clear exactly how the car makes its decisions. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action.
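The flavor of this end-to-end approach can be sketched in a few lines. The example below is a toy stand-in (made-up linear data and a single linear layer, nothing like Nvidia’s actual network): it learns to imitate recorded “human” steering without any hand-written driving rules, and the trained weights are just numbers with no stated rationale.

```python
import numpy as np

# Toy end-to-end imitation learner: sensor readings in, steering command out,
# with weights fitted to human demonstrations instead of hand-coded rules.
rng = np.random.default_rng(0)

# Hypothetical demonstration data: 200 moments of 8 sensor readings, where
# the recorded human steering happens to be a fixed linear blend of them.
true_blend = rng.normal(size=8)
sensors = rng.normal(size=(200, 8))
human_steering = sensors @ true_blend

# One linear layer trained by gradient descent on mean squared error.
weights = np.zeros(8)
for _ in range(500):
    residual = sensors @ weights - human_steering
    grad = 2 * sensors.T @ residual / len(sensors)
    weights -= 0.05 * grad

# The model now imitates the demonstrations closely, yet the learned weights
# are opaque: nothing in them explains any single steering decision.
print(np.allclose(weights, true_blend, atol=1e-3))  # -> True
```

Scale the same idea up from eight numbers to millions of weights fed by camera images, and the interpretability problem described here becomes clear.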
This points to a dark side of artificial intelligence not just in vehicles, but AI in general. The car’s AI technology, known as deep learning, has proven to be very powerful at solving complex problems. The hope is deep learning will be able to diagnose deadly diseases, make trading decisions, and help transform whole industries. But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. That’s one reason Nvidia’s car is still experimental. Click here to learn more.