Tuesday, 6 August 2013

COMPUTER BOOSTS

Thanks to research being conducted at the California Institute of Technology, regular microscopes could soon be capable of much higher-resolution imaging. Rather than modifying the microscopes’ optics, the Caltech researchers are focusing on a computer program that processes and combines images from the devices.
The main hardware change to an existing microscope involves installing an array of about 150 LEDs beneath the stage, in place of the regular light. Using each bulb in that array one at a time, 150 images are then acquired of the sample that’s being viewed. In each image, the light is originating from a slightly different (and known) direction. The computer program then stitches all of those images together into one cohesive image of the sample.
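The resolution gain comes from this geometry: each LED’s known position fixes the angle of its illumination, and the most oblique LEDs contribute the highest spatial frequencies, enlarging the synthetic aperture. A minimal sketch of that bookkeeping (the grid size, LED spacing, working distance and objective NA below are illustrative assumptions, not values from the Caltech setup):

```python
import math

# Map a known LED position to the sine of its illumination angle.
# All dimensions are illustrative assumptions, not the Caltech figures.
LED_HEIGHT = 0.08        # LED array 8 cm below the sample, metres (assumed)
LED_PITCH = 0.004        # 4 mm between LEDs (assumed)
NA_OBJECTIVE = 0.1       # a low-power objective (assumed)

def illumination_sine(row, col, n=12):
    """sin(angle) of light arriving from LED (row, col) in an n x n grid."""
    x = (col - (n - 1) / 2) * LED_PITCH
    y = (row - (n - 1) / 2) * LED_PITCH
    r = math.hypot(x, y)                 # lateral offset from the optical axis
    return r / math.hypot(r, LED_HEIGHT)

# A 12 x 12 grid gives 144 LEDs, close to the ~150 described above.
# The most oblique LED extends the captured spatial frequencies the most:
max_sine = max(illumination_sine(r, c) for r in range(12) for c in range(12))
synthetic_na = NA_OBJECTIVE + max_sine   # synthetic aperture after stitching
print(f"synthetic NA {synthetic_na:.2f} vs objective NA {NA_OBJECTIVE:.2f}")
```

With these numbers the stitched synthetic aperture is several times the objective’s own NA, which is the mechanism behind the resolution boost.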
A rendering of the Caltech system
That composite image represents not only the light’s intensity, but also the light phase information (related to the angle at which the light travels) for each of the sub-images. Using that light field data, the program allows users to zoom in on any part of the overall image, while still being able to make out details. It’s also able to digitally correct for flaws, such as areas which are initially out of focus.
Ultimately, images produced by the system contain 100 times more information than those produced by an unaided microscope. Additionally, it creates images with both the wide field of view of a lower-powered lens, and the resolution of a stronger one. Ordinarily, microscope users have to choose between getting wide shots of samples in which details can’t be made out, or detailed shots of just a small part of the sample – sort of like using either a wide-angle or close-up lens on a camera.
It should cost approximately US$200 to add the technology to an existing microscope. The scientists hope that it could be used in applications such as digital pathology, wafer inspection and forensic photography, or by medical clinics in developing nations.

Artificial Memories

An ongoing collaboration between the Japanese Riken Brain Science Institute and MIT’s Picower Institute for Learning and Memory has resulted in the discovery of how to plant specific false memories into the brains of mice. The breakthrough significantly extends our understanding of memory and expands the experimental reach of the new field of optogenetics.
The ability to learn and remember is a vital part of any animal's ability to survive. In humans, memory also plays a major role in our perception of what it is to be human. A human is not just a survival machine, but also reads, plans, plays golf, interacts with others, and generally behaves in a manner consistent with curiosity and a need to learn.
Forgetting where we put the keys is a standard part of the human condition, but in the last few decades our knowledge of more serious memory disorders has grown rapidly. These range from Alzheimer's disease, where the abilities to make new memories and to place one's self in time are seriously disrupted, to Post-Traumatic Stress Disorder, in which a memory of a particularly unpleasant experience cannot be suppressed.
Such disorders are a powerful force driving research into how healthy memory processes function, so that we can diagnose and treat memory dysfunction.
In previous work, the team of researchers at the Picower Center for Neural Circuit Genetics were able to identify an assembly of neurons in the brain's hippocampus that held a memory engram, or data concerning a sequence of events that had taken place previously. In recalling a memory, the brain uses this data to reconstruct the associated events, but this reconstruction generally varies slightly to substantially from what actually occurred.
The researchers were able to locate and identify the neurons encoding a particular engram through the use of optogenetics. Optogenetics is a neuromodulation technique that uses a combination of genetic modification and optical stimulation to control the activity of individual neurons in living tissue, and to measure the effects of such manipulation.
The MIT team genetically engineered the hippocampal cells of a new strain of mouse so that the cells would form a light-sensitive protein called a channelrhodopsin (ChR) that activates neurons when stimulated by light. This involved engineering the mice to add a gene for the synthesis of ChR, but that gene was also modified so that ChR would only be produced when a gene necessary for memory formation was activated. In short, only neurons actively involved in forming memories could later be activated by light.
Initial work using the genetically engineered mice focused on determining what neurons in the hippocampus are associated with forming a new, specific memory. There were at least two schools of thought on how memory engrams were stored – locally or globally. They discovered that a memory is stored locally, and can be triggered by optically activating a single neuron.
“We wanted to artificially activate a memory without the usual required sensory experience, which provides experimental evidence that even ephemeral phenomena, such as personal memories, reside in the physical machinery of the brain,” says lead author Steve Ramirez.
The new results came from a chain of behavioral experiments. The researchers identified the set of brain cells that were active only when a mouse was learning about a new environment. The genes activated in those cells were then coupled with the light-sensitive ChR.
These mice were then exposed to a safe environment in a first box, during which time the neurons which were actively forming memories were labelled with ChR, so they could later be triggered by light pulses.
Next the mice were placed in a different chamber. While pulsing the optically active neurons to activate the memory of the first box, the mice were given mild foot shocks. Mice are particularly annoyed by such shocks, so this created a negative association.
When the mice were returned to the first box, in which they had only pleasant experiences, they clearly displayed fear/anxiety behaviors. The fear had falsely become associated with the safe environment. The false fear memory itself could be reactivated at will in any environment by triggering the neurons associated with that false memory.
Cartoon of the MIT-Riken experiment. In the left-hand box, the mouse learns the safe environment.
“Remarkably, the recall of this false memory recruited the same fear centers that natural fear memory recall recruits, such as the amygdala,” says Xu Liu, a post-doctoral fellow and co-first author of the study. The recall of this false memory drove an active fear response in associated parts of the brain, making it indistinguishable from a real memory. “In a sense, to the animal, the false memory seems to have felt like a ‘real’ memory,” he said.
“These kinds of experiments show us just how reconstructive the process of memory actually is,” said Steve Ramirez, a graduate student in the Tonegawa lab. “Memory is not a carbon copy, but rather a reconstruction, of the world we've experienced. Our hope is that, by proposing a neural explanation for how false memories may be generated, down the line we can use this kind of knowledge to inform, say, a courtroom about just how unreliable things like eyewitness testimony can actually be.” Perhaps the researchers can also provide a solution for the problem of lost keys.
If, like me, you've always found ping pong a little lacking in flashing lights, Pingtime, an augmented reality project created for the 2013 Rokolectiv Festival in Bucharest, may just take your fancy. Conceived by Sergiu Doroftei, the arts project augments an ordinary table tennis table with projections and sounds by equipping the paddles with sensors and using an infrared camera to track the ball.
Of course hardware alone does not a box of AR tricks make. The team behind the project had to implement some software wizardry too, using the vvvv programming environment for graphics and sound, and the OpenCV computer vision library to help keep an eye on the ball.
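The heart of the tracking problem is simple: in an infrared frame the ball is the brightest blob, and its centroid gives the position to project onto. Pingtime does this with OpenCV on a live camera feed; the toy sketch below illustrates the same idea with NumPy only, on a synthetic frame:

```python
import numpy as np

# Toy version of IR ball tracking: find the centroid of the brightest blob.
# Pingtime uses OpenCV on live video; this is just the underlying principle.
def track_ball(frame, threshold=200):
    """Return (row, col) centroid of pixels brighter than threshold, or None."""
    rows, cols = np.nonzero(frame > threshold)
    if rows.size == 0:
        return None                      # ball not visible in this frame
    return rows.mean(), cols.mean()

# A synthetic 100 x 100 IR frame with a bright "ball" centered near (30, 70):
frame = np.zeros((100, 100), dtype=np.uint8)
frame[28:33, 68:73] = 255
print(track_ball(frame))                 # → (30.0, 70.0)
```

Feeding the centroid from each frame into the projection software is then enough to paint effects that follow the ball around the table.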
The overall effect is striking, with the surface of the table effectively becoming a giant display: a canvas on which to paint lights and colors coordinated with the ensuing ping-pongery. Judging by the video, the whole effect does seem to make the game much harder to play (perhaps as a result of the relative darkness required for the light show as much as anything), but perhaps this is the point.
"Pingtime takes a look into how realtime generated computer responses are affecting reaction time in fast gameplay situations," the video's description goes.
You can see the mesmerizing Pingtime in action in the video below.

Graphene Super Capacitor

Graphene-based supercapacitors have already proven the equal of conventional supercapacitors – in the lab. But now researchers at Melbourne’s Monash University claim to have developed a new scalable and cost-effective technique for engineering graphene-based supercapacitors that brings them a step closer to commercial development.
With their almost indefinite lifespan and ability to recharge in seconds, supercapacitors have tremendous energy-storage potential for everything from portable electronics, to electric vehicles and even large-scale renewable energy plants. But the drawback of existing supercapacitors has been their low energy density of around 5 to 8 Wh/liter, which means they either have to be exceedingly large or recharged frequently.
Professor Dan Li and his team at Monash University’s Department of Materials Engineering have created a graphene-based supercapacitor with an energy density of 60 Wh/liter, which is around 12 times higher than that of commercially available supercapacitors and in the same league as lead-acid batteries. The device also lasts as long as a conventional battery.
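To put 60 Wh/liter in perspective, a supercapacitor’s stored energy follows E = ½CV². The quick back-of-envelope below uses a 2.7 V cell voltage, a typical figure for liquid-electrolyte supercapacitors and an assumption on my part, not a number from the Monash paper:

```python
# What capacitance per litre does 60 Wh/l imply? E = 1/2 * C * V^2.
CELL_VOLTAGE = 2.7                        # volts (typical value, assumed)
energy_wh_per_l = 60.0                    # the Monash team's reported density
energy_j_per_l = energy_wh_per_l * 3600   # 1 Wh = 3,600 J
cap_f_per_l = 2 * energy_j_per_l / CELL_VOLTAGE ** 2
print(f"{cap_f_per_l:,.0f} F per litre")  # tens of thousands of farads
```

The implied tens of thousands of farads per litre is why packing the graphene sheets at sub-nanometer spacing, and thus maximizing surface area per volume, matters so much.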
To maximize the energy density, the team created a compact electrode from an adaptive graphene gel film they had previously developed. To control the spacing between graphene sheets on the sub-nanometer scale, the team used liquid electrolytes, which are generally used as the conductor in conventional supercapacitors.
Conventional supercapacitors are generally made of highly porous carbon with unnecessarily large pores, and rely on a liquid electrolyte to transport the electrical charge. In Li’s team’s supercapacitor, by contrast, the liquid electrolyte plays a dual role: conducting electricity and maintaining the minute spacing between the graphene sheets. This maximizes the density without compromising the supercapacitor’s porosity, they claim.
To create their compact electrode, the researchers used a technique similar to one used in traditional paper making, which they say makes the process cost-effective and easily scalable for industrial applications.
"We have created a macroscopic graphene material that is a step beyond what has been achieved previously. It is almost at the stage of moving from the lab to commercial development," Professor Li said.

NIAC Phase I

A dozen inventors have received a chance to demonstrate the potential for their pet space projects as winners of NASA's 2013 Innovative Advanced Concepts (NIAC) Program Phase I awards. The winners were chosen based on their potential to transform future aerospace missions by enabling either breakthroughs in aerospace capabilities or entirely new missions.
Each NIAC Phase I winner receives about US$100,000 to spend a year pursuing their ideas, including an initial feasibility study of a novel aerospace concept. The proposals this year include: 3D printing of biomaterials; using galactic rays to map the insides of asteroids; and an "eternal flight" platform that could hover in the Earth's atmosphere.
The list of this year's awardees includes:
  • Rob Adams of NASA Marshall Space Flight Center – Pulsed Fission-Fusion (PuFF) propulsion system
  • John Bradford of SpaceWorks Engineering – Torpor-inducing transfer habitat for human stasis to Mars
  • Hamid Hemmati of the NASA Jet Propulsion Laboratory – Two-dimensional planetary surface landers
  • Nathan Jerred of Universities Space Research Association – Dual-mode propulsion system enabling CubeSat exploration of the Solar System
  • Anthony Longman – Growth-adapted tensegrity structures
  • Mark Moore of NASA Langley Research Center – Eternal flight as the solution for 'X'
  • Thomas Prettyman of the Planetary Science Institute – Deep mapping of small solar system bodies with galactic cosmic ray secondary particle showers
  • Lynn Rothschild of NASA Ames Research Center – Biomaterials out of thin air
  • Joshua Rovey of the University of Missouri – Plasmonic force propulsion revolutionizes Nano/PicoSatellite capability
  • Adrian Stoica of the NASA Jet Propulsion Laboratory – Transformers for extreme environments
  • Christopher Walker of the University of Arizona – 10-meter sub-orbital balloon reflector
  • S.J. Ben Yoo of the University of California-Davis – Low-mass planar photonic imaging sensor
Let's take a look at three of the most promising concepts with a view to how they would work, and where the tricky bits might be hiding.

Rob Adams' PuFF pulsed fission-fusion propulsion system

The PuFF propulsion system is a new take on an old idea: confining a deuterium-tritium plasma so that it acts as a breakeven fusion reactor. People have been trying this seriously for half a century and have not yet succeeded, so basing a space drive on such a thing would be extremely speculative.
Rob Adams' PuFF drive, in which DT gas is fed into the SCF plasmoid.
In the PuFF approach, however, the fusion of the deuterium-tritium fuel is only the first stage of the process. Instead of seeking a particular power output, the fusion reaction is being carried out to provide a source of neutrons. This D-T reaction releases a 3.5 MeV alpha particle, and a neutron with 14.1 MeV of kinetic energy.
The fusion-fission drive concept on the nuclear level (Photo: NASA)
As seen above, in a second stage of nuclear reaction the fusion neutrons can be captured by a uranium nucleus, thereby causing it to fission, releasing some 200 MeV of nuclear energy. Because of the high energy of the fusion neutrons, four to five neutrons will generally be released from uranium fission, rather than the two to three seen with thermal neutrons.
If you send a neutron into a critical mass of fissile material, the resulting chain reaction continues until the critical mass explodes. However, if you have a bit less than a critical mass, the total number of fissions resulting from the input of a swarm of fission neutrons is rather impressive.
Impact of 1,000 fusion neutrons on uranium nuclei will initially cause 1,000 uranium atoms to fission. This will release about 5,000 neutrons in the uranium, owing to the large energy of the fusion neutrons. If the fissile material is one percent away from being a critical mass, some of these neutrons will escape the uranium, but enough will cause fissions that produce 0.99 times 5,000, or 4,950 neutrons. This requires about 1,980 fissions. In the next step, the 4,950 neutrons cause fissions that produce 0.99 times 4,950 neutrons, or 4,900 neutrons, which requires 1,960 fissions.
As the chain goes on, it eventually runs out of steam, as shown by the reduction in the number of neutrons. However, in the course of the not-quite-critical chain reaction, roughly 200,000 uranium nuclei will have undergone fission. Uranium fission releases about 200 MeV of energy. The original 1,000 fusions that produce the 1,000 free neutrons release about 18 GeV, but the resulting fissions release about 40,000 GeV. Coupling the fusion neutrons into a not-quite-critical mass of uranium thus amplifies the fusion power by a factor of more than 2,000, providing plenty of power for a spacecraft drive!
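The chain-reaction arithmetic above is just a geometric series, and can be checked in a few lines. The assumed figures (multiplication factor k = 0.99, five neutrons per fast-neutron fission, 2.5 per subsequent fission, 200 MeV per fission, 17.6 MeV per D-T fusion) follow the worked example in the text:

```python
# Subcritical neutron multiplication as a geometric series.
FUSIONS = 1000
K, NU_FAST, NU, E_FISSION, E_FUSION = 0.99, 5.0, 2.5, 200.0, 17.6  # MeV

neutrons = FUSIONS * NU_FAST          # 5,000 neutrons from the first fissions
fissions = float(FUSIONS)             # the initial 1,000 fast fissions
while neutrons >= 1.0:
    neutrons *= K                     # one percent of each generation is lost
    fissions += neutrons / NU         # fissions sustaining the next generation

gain = fissions * E_FISSION / (FUSIONS * E_FUSION)
print(f"~{fissions:,.0f} fissions, fusion power amplified ~{gain:,.0f}x")
```

The series converges to roughly 200,000 fissions and an energy amplification in the low two-thousands, matching the figures in the text to within rounding.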
A pulsed drive based on the fusion-fission combination process need not achieve fusion breakeven. Instead, the focus is on fusion-based neutron generation followed by fission-based neutron multiplication. The very largest inertial confinement machines at present produce tens of megajoules per pulse. If the neutron output were directed into a PuFF-type fusion-fission drive, the total nuclear output could easily be 10 GJ per pulse – at one pulse per second, about 3 MWh of energy release each second, probably 1,000 times the power needed for a spaceship drive, leaving plenty of room for engineering compromise.

Nathan Jerred's Small-scale Dual-mode Propulsion System

A considerable number of exploratory designs have surfaced, with the common intent of taking the nano/pico satellite/probe concept beyond low-Earth orbit. Most of these are single-principle drives, which would be found lacking under some circumstances. For example, a CubeSat might be able to reach the velocities required for interplanetary travel using a solar-powered ion drive. However, a large number of trajectories would not be feasible because the ion drive does not provide enough thrust for course alteration, course correction, orbital insertion, or other astronavigation challenges.
In such nanoprobes it seems unlikely that two independent propulsion systems can be shoehorned into place while still having adequate performance for interplanetary missions. Jerred's dual-mode propulsion system is a new attempt to address this problem.
Nathan Jerred's dual-mode propulsion system shares most components between a high-thrust thermal mode and a high-efficiency ion mode.
The two modes of which he speaks are a thermal drive and an ion drive. The source of power for both would be radioactive decay, probably of a mass of plutonium 238 (Pu-238). Such radioactive sources have been used in many space missions to provide a heat source for a radioisotope thermoelectric generator (RTG). An RTG powers the Cassini mission, the Mars rover Curiosity, and the New Horizons mission to Pluto.
In Jerred's dual-mode drive, a modified RTG provides power for both drive modes. For the thermal drive, reaction mass in the thermal propulsion propellant tanks is fed through the RTG, where it is heated to about 850° C (1,560° F). This is more than enough to gasify the propellant and generate a high pressure, after which it expands through the nozzle to produce thrust.
Without more engineering information it is very difficult to evaluate how much thrust, but it should be on the order of one newton (about 1/4 lb). The specific impulse should be in the neighborhood of 300 seconds, similar to that of chemical fuels. Running the RTG at a higher temperature is a possibility, which would result in a larger specific impulse, but higher temperatures put a strain on the Stirling RTG components.
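Those two figures pin down the rest of the propellant budget via standard rocket bookkeeping. Taking the article's ~1 N thrust and ~300 s specific impulse as givens, everything derived below is just the textbook relations, not data from Jerred's proposal:

```python
# Effective exhaust velocity and mass flow from thrust and specific impulse.
G0 = 9.81                                # standard gravity, m/s^2
isp_s, thrust_n = 300.0, 1.0             # the article's rough estimates

v_exhaust = isp_s * G0                   # effective exhaust velocity, m/s
mdot = thrust_n / v_exhaust              # propellant mass flow, kg/s
print(f"exhaust {v_exhaust:.0f} m/s, flow {mdot * 1000:.2f} g/s")
```

A flow of a third of a gram per second means a few kilograms of propellant buys hours of continuous thermal thrusting, which is the kind of burn needed for orbital insertion at a target body.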
The second mode of propulsion is the ion drive. In this case, the RTG operates to produce electricity for an ion drive. An RTG that provides 1 kW of heat and about 300 W of electrical power would require about 2 kg (4.4 lb) of Pu-238, the most common isotope used in RTGs. But Pu-238 has a half-life of almost 90 years, which is overkill for, say, a mission to an asteroid or to Mars. If two to three months of propulsive power would be enough for a mission, polonium 210 could be used. It has a half-life of 138 days, and only about 15 g (0.5 oz) would be required to produce 1 kW of heat.
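The trade-off between the two isotopes is driven entirely by radioactive decay: thermal output falls as P(t) = P₀ · 2^(−t/t½). A quick comparison over an assumed 90-day mission (the mission length is my illustration, not a figure from the proposal):

```python
# Remaining thermal power fraction of a radioisotope source after `days`.
def power_fraction(days, half_life_days):
    return 2 ** (-days / half_life_days)

MISSION_DAYS = 90                                      # assumed mission length
pu238 = power_fraction(MISSION_DAYS, 87.7 * 365.25)    # Pu-238: ~87.7 yr
po210 = power_fraction(MISSION_DAYS, 138.0)            # Po-210: 138 days
print(f"after {MISSION_DAYS} days: Pu-238 {pu238:.1%}, Po-210 {po210:.1%}")
```

Pu-238 is essentially constant over any plausible mission, which is exactly the "overkill" the text describes, while Po-210 has already lost more than a third of its output after three months, so it only suits short missions.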
The sample mission for the proposed study is to send a 10 kg (22 lb) payload to Europa. The Phase I study will provide engineering analysis of the major components and look at performance-related compromises, which will help determine the feasibility of such a mission. Positive results may lead to an 18-month study that examines the sample mission in more depth.

John Bradford's Torpor to Mars Missions

One problem with space flight at our current state of advancement is that it takes too long. Flight times within the solar system are measured in months or years, during which time astronauts would generally have very little to do, but continue to consume full helpings of food, water, oxygen, and power. Psychologists also suggest that the interminable boredom of long-duration space flights may present substantial difficulties for a crew.
Science fiction has often resorted to inducing suspended animation to avoid these dull periods. The problem with looking to suspended animation for a solution is that humans seem to lack the ability to safely achieve significant levels of hibernation or torpor. Despite this, the profound medical applications that reduced metabolism states could offer have stimulated considerable research on just what causes hibernation, how it differs from torpor, and how it might be induced in mammals without natural access to these states. John Bradford has convinced NASA that it is time to take a serious look for applications in space travel.
A stasis habitat, in which astronauts would gently enter a state of torpor, may help accomplish long-duration missions such as a voyage to Mars.
The idea isn't to freeze astronauts and thaw/resuscitate them at Mars, or to induce true hibernation. True hibernation occurs when an animal allows its heart rate to drop precipitously, and its body temperature to drop to a few degrees above ambient.
A better model is likely to be a wintering bear. Bears do slow their heart rate to as low as 10 beats per minute (it's normally about 40 when asleep), but only drop their body temperature by about 5° C (9° F). Their long winter sleep is more often called torpor, or winter lethargy. This is the general pattern among the larger mammals which "hibernate." By analogy, people are expected to enter more easily into extended torpor than into true hibernation.
Considerable experimentation has been done in search of triggers for torpor. In the area of drug-like triggers, one study showed rather conclusively that small quantities of hydrogen sulfide in the air would induce a hibernation-like state in mice. The animals' body temperature fell to about 2° C (4° F) above ambient, and their breathing rate fell by more than 90 percent. Their blood pressure, however, remained high. Researchers have also been able to induce torpor in pigs for several hours without apparent damage.
John Bradford is taking what is known about torpor to perform a "what-if" study. His project will design a torpor module for astronauts on a slow boat to Mars, and will compare the supply and mission requirements for a range of conventional technology assumptions for such missions. This Phase I analysis is only intended to investigate the compatibility of a torpor module with inner solar system voyages. Later in the program, if renewed, he will study how to accomplish the goal of induction and maintenance of torpor in a crew of astronauts.

Ring Weeder

Removing weeds can be annoying, especially in an area with a lot of plants. Ring Weeder slips over the user's index finger and allows for precision weed pulling all the way down to the root.
When taking a hands-on approach to weeding, the challenge is to make sure that the pesky invader is pulled all the way out, root intact. If it's not a clean extraction, there's a very good chance that the weed will just grow back and you'll have to try again.
Ring Weeder is worn like a ring over the gardening glove, and has a forked end that the gardener sticks in the ground behind the weed. The offending plant and root are then removed with a smooth dig and lift motion. It's a simple tool, but one that could prove to be a time saver for anyone who does a lot of gardening.
Vincent Suozzi, the creator of Ring Weeder, is seeking funding on Kickstarter. The campaign has already more than doubled its modest funding goal with almost two weeks left to run. Early bird pledge levels have all gone, so backers will now need to offer at least US$10 for a single Ring Weeder.
The Kickstarter pitch below provides more information on the Ring Weeder.

Gaming Lapy

When it comes to gaming laptops, the era of two-inch-thick, weighty monstrosities is truly over. Systems such as Razer's Blade and Blade Pro have carved out a decidedly more pleasing form-factor for the category, and with the GS70, MSI is ready to stake its claim at the top of the market. The new system is particularly thin for its category and packs some high-end hardware within its svelte body.
The GS70 is aimed firmly at the top end of the market. It starts at US$1799.99, comes in at 0.85-inches (2.15 cm) thick and weighs 5.73 lb (2.6 kg). Running Windows 8, the system boasts some impressive internals including an Intel Haswell Core i7-4700HQ processor, an NVIDIA GeForce GTX 765M 2 GB GPU and 16 GB of DDR3L 1600MHz RAM.
The system is just 0.85 inches thick
The GS70's 17.3-inch anti-reflective display comes in at a full 1920 x 1080 resolution and it's possible to output to three displays at up to 4K resolution through the built-in HDMI and Mini DisplayPort. In terms of connectivity, there are four USB 3.0 ports, three audio jacks and a 720p webcam. There are also Killer E2200 Game Networking and Killer N1202 2x2(a/b/g/n) cards on board.
The system's SteelSeries keyboard features anti-ghosting technology and color backlighting and the laptop is fitted with a six-cell 120w battery, though there's no word on how long it will run on a single charge.
MSI GS70 front view
The GS70 also employs a dual fan thermal solution to keep the machine cool. The system pulls heat from the top of the laptop, dissipating it at a 45-degree upward angle, a technique that MSI claims will guarantee a cool gaming experience.
There are two versions of the GS70 available, coming in at $1799.99 and $1999.99. The lower-cost system features a 128 GB SSD and a 750 GB HDD, while its more expensive cousin comes with a 128 GB SSD RAID and 1 TB HDD configuration.
In terms of competition, the GS70 looks fairly well placed. It matches the Blade Pro's specs while offering more RAM, and while it might lack its rival's LCD trackpad feature, its prices do start $500 lower than Razer's machine.

Special Technique For Turning Sunshine Into Power

A new technique developed by a University of Colorado Boulder team converts sunshine and water directly into usable fuel. The technique involves concentrating sunlight in a solar tower to achieve temperatures high enough to drive chemical reactions that split water into its constituent oxygen and hydrogen molecules. In this way, the team says it should be able to cheaply produce massive amounts of hydrogen fuel.
The team's solar thermal system concentrates sunlight off a vast array of mirrors into a single point at the top of a tall tower to produce very high temperatures. When this heat is delivered into a reactor full of metal oxides, the oxides heat up and release oxygen. The reduced metal oxide now gains a chemical composition that makes it ready to bind with oxygen atoms. Introducing steam into the reactor, which can also be produced by heating water with sunlight, causes the compound to draw oxygen atoms out of the water molecules, leaving behind hydrogen molecules that can be collected as hydrogen gas.
While the concept of using an array of mirrors to concentrate sunlight into a single point at the top of a tall tower is nothing new, being the same technique used in solar thermal tower power plants, there are certain key differences here. Typically, sunlight is concentrated about 500 to 800 times in standard solar power tower designs to reach temperatures of about 500° C (932° F) and produce steam that drives a turbine to generate electricity. However, splitting water requires temperatures of around 1,350° C (2,500° F), which is hot enough to melt steel.
"You need this high temperature both to give you the driving force to drive the chemical reactions and also the kinetics to make the reactions go fast enough to make the process practical," says Charles Musgrave, Professor of Chemical and Biological engineering at CU-Boulder.
To get those kinds of temperatures, the team added additional mirrors within the tower to further concentrate the sunlight onto the reactor and the active material. While it isn't too different in principle from using a magnifying glass to focus sunlight onto a piece of paper to get it to burn, this setup allows the reflected sunlight to be concentrated by up to 2,000 times. "We are trying to use sunlight to drive chemical reactions that require higher temperatures than combustion," says Musgrave.
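An ideal upper bound on what a given concentration factor can achieve comes from radiative balance: at equilibrium a blackbody receiver re-radiates what it absorbs, C·I = σT⁴. This ignores all losses, so real receivers run well below it, and the 1,000 W/m² insolation is an assumed clear-sky value, but it shows why pushing concentration from the hundreds into the thousands matters:

```python
# Ideal blackbody equilibrium temperature vs. solar concentration factor.
SIGMA = 5.67e-8                          # Stefan-Boltzmann constant, W/m^2/K^4
INSOLATION = 1000.0                      # clear-sky insolation, W/m^2 (assumed)

def equilibrium_temp_c(concentration):
    """Loss-free upper bound on receiver temperature, in degrees Celsius."""
    t_kelvin = (concentration * INSOLATION / SIGMA) ** 0.25
    return t_kelvin - 273.15

for c in (800, 2000):
    print(f"C = {c}: up to ~{equilibrium_temp_c(c):,.0f} C")
```

At a concentration of 2,000 the loss-free ceiling sits comfortably above the ~1,350° C needed for water splitting, leaving margin for the substantial real-world losses.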
The big breakthrough came about when the team discovered certain active materials that allowed both these chemical reactions (reducing the metal oxide and re-oxidizing it with steam) to occur at the same temperature.
There aren't yet any working models, and conventional theory dictates that a change in temperature is necessary to make the two different reactions occur – a high temperature for reducing the oxide and a low temperature for re-oxidation. Instead, the team uses the introduction or absence of steam to drive the different reactions, something made possible by certain unique properties of the metal oxide compounds used.
"We determined that both reactions could be driven at the same temperature of about 2,500° F (1,371° C)," Musgrave told us. "Even though we run at a constant and lower temperature we still generate more hydrogen than competing processes."
Alan Weimer, the research group leader at CU-Boulder says that eliminating the time and energy required for temperature swings lets them make more hydrogen in a given amount of time. To produce even more hydrogen fuel they'd only need to increase the amount of material in the reactor. "In many respects, our approach is out of the box where prior work was inside of the box using the temperature swing," he adds.
According to the team, huge solar plants spread across many acres could produce much more fuel per acre than biofuels. Another advantage this process has over other renewable technologies, such as wind and photovoltaics, is that it uses sunlight to directly drive chemical reactions, producing fuel for use in combustion engines or fuel cells. In contrast, photovoltaic processes first convert sunlight into electricity, reducing overall efficiency.
"Our objective is to produce hydrogen (H2) at $2/kg H2," Weimer tells Gizmag. "This is equivalent to about US$2/gallon (3.7 L) of gasoline based on mileage in a fuel cell car versus a combustion engine today." With the aid of a solar thermal plant, the team believes that on a land area of about 48,500 ha (120,000 acres) they can generate 100,000 kg (222,460 lb) of hydrogen per day, which is enough to run over 5,000 hydrogen-fuel cell buses daily.
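Those fleet numbers imply a per-bus consumption the article never states directly; the quick sanity check below infers it (the ~20 kg per bus per day is a derived figure, not a quote from the team):

```python
# Sanity-checking the projected hydrogen plant against the bus-fleet claim.
daily_h2_kg = 100_000                    # projected output, kg of H2 per day
buses = 5_000                            # "over 5,000" buses served daily

kg_per_bus = daily_h2_kg / buses         # implied consumption per bus per day
daily_value = daily_h2_kg * 2            # value at the $2/kg production target
print(f"{kg_per_bus:.0f} kg per bus per day, ${daily_value:,} of H2 per day")
```

Twenty kilograms per day is in the right ballpark for a fuel-cell transit bus, so the fleet figure and the plant output are at least internally consistent.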
Though the technology has the potential to be a game-changer in pushing the hydrogen economy forward, commercialization might still be several years away thanks to continuing stiff competition from fossil fuels.

Neuromorphic Chips Used For Reverse Engineering

Researchers at the University of Zurich and ETH Zurich have designed a sophisticated computer system that is comparable in size, speed and energy consumption to the human brain. Based on the development of neuromorphic microchips that mimic the properties of biological neurons, the research is seen as an important step in understanding how the human brain processes information and opens the door to fast, extremely low-power electronic systems that can assimilate sensory input and perform user-defined tasks in real time.

Neuromorphic engineering

Layout of a multi-neuron chip comprising an array of analog/digital silicon neurons and synapses.
The human brain is a remarkable machine: with a power consumption of only about 20 W, it can outclass the fastest supercomputer in most real-world tasks – particularly those involving the processing of sensory input. Researchers believe that the brain's astounding abilities aren't down to mere processing speed, but rather to the highly efficient way in which it handles information.
Though we lack the tools to fully investigate the brain's "computing architecture," we know that, unlike a standard CPU, the brain uses a mixture of analog and digital signals at the same time; that information is processed on a massively parallel scale at relatively slow speeds; that memory and instruction signals are often seamlessly combined; and that continuous adaptation and self-organization of its neural networks play a crucial part in its function.
Established in the late 1980s, neuromorphic engineering is an interdisciplinary amalgam of neuroscience, biology, computer science and a number of other fields that attempts first to understand how the brain manipulates information, and then to replicate the same processes on a computer chip. The goal is the development of new, powerful computing architectures that could be used to model the brain and, perhaps, even serve as a stepping stone to a sophisticated, human-like artificial intelligence.
Most attempts at replicating a human brain involve simulating a very large number of neurons on a supercomputer. The neuromorphic approach, however, is quite different: it involves developing custom electronic circuits that simulate the neuron firing mechanisms of the actual brain, and that are similar to the brain in terms of size, speed and energy consumption.
"The neurons implemented with our approach have programmable time constants," Prof. Giacomo Indiveri, who led the research efforts, told Gizmag. "They can go as slow as real neurons or they can go significantly faster (e.g. >1000 times), but we slow them down to realistic time scales to be able to have systems that can interact with the environment and the user efficiently."
The silicon neurons, Indiveri told us, are comparable in size to actual neurons and they consume very little power. Compared to the supercomputer approach, their system consumes approximately 200,000 times less energy – only a few picojoules per spike.
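Those two figures can be combined into a rough implied comparison. Taking 3 pJ as an assumed stand-in for "a few picojoules" (not a number from the researchers):

```python
# Implied energy per spike for the supercomputer approach, given the
# article's 200,000x ratio and an assumed 3 pJ per spike on the chip.

pJ = 1e-12
spike_energy_chip = 3 * pJ      # assumed value within "a few picojoules"
energy_ratio = 200_000          # from the article

spike_energy_supercomputer = spike_energy_chip * energy_ratio
print(spike_energy_supercomputer)   # roughly 6e-07 J, i.e. ~0.6 microjoules per spike
```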
A neuromorphic chip uses its most basic components in a radically different way than your standard CPU. Transistors, which are normally used as an on/off switch, here can also be used as an analog dial. The end result is that neuromorphic chips require far fewer transistors than the standard, all-digital approach. Neuromorphic chips also implement mechanisms that can easily modify synapses as data is processed, simulating the brain's neuroplasticity.

Soft state machines

The neuromorphic chips are subjected to a visual cognitive test (Image: ETH Zurich)
Promising as they may be, neuromorphic neurons have proven difficult to organize into cooperative networks that perform a user-defined task. The Zurich researchers have now solved this problem by developing a sort of elementary structure – what they call a "soft state machine" (SSM) – that can be used to describe and implement complex behaviors in a neuromorphic system.
In computer science, a finite state machine (FSM) is a mathematical model similar to a flowchart that can be used to design computer programs and logic circuits. FSMs can implement context-dependent decision-making, "if-A-then-do-B" clauses, and use a short-term memory of sorts.
SSMs are neuronal state machines, similar to FSMs, that combine analog and digital signal processing. As such, they can be used to describe a complex behavior in a neuromorphic chip: the behavior is first described in terms of a standard finite state machine, and then automatically translated into an SSM that can be implemented on a neuromorphic chip.

A smarter silicon retina

The researchers tested their findings on an advanced electronic camera known as a silicon retina, using a visual-processing task inspired by those used to evaluate the cognitive abilities of human subjects.
"The subject (our neuromorphic system in our case) is presented with a cue at the beginning of the experiment which specifies the rule to use for the task," Indiveri explained. "The subject is required to look at a screen in which a horizontal bar and a vertical bar are moving, and depending on the initial cue, the subject is supposed to report if and when a vertical bar crosses the middle of the screen from left to right, or if a horizontal bar crosses it from right to left."
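The cue task Indiveri describes maps naturally onto the finite state machine formalism mentioned earlier. A toy software sketch of the same logic (pure Python for illustration; the actual system implements this as a soft state machine in analog/digital silicon, not in code):

```python
# Minimal FSM sketch of the cue-dependent bar-crossing task: the cue given
# at the start of the trial selects which crossing event should be reported.

def run_trial(cue, events):
    """cue: 'vertical' or 'horizontal' (the rule for this trial).
    events: stream of (bar, direction) tuples observed on the screen.
    Returns True if the cued crossing was seen and reported."""
    target = {"vertical": ("vertical", "left-to-right"),
              "horizontal": ("horizontal", "right-to-left")}[cue]
    state = "waiting"
    for event in events:
        # Context-dependent "if-A-then-do-B": the same event stream
        # produces different reports depending on the remembered cue.
        if state == "waiting" and event == target:
            state = "reported"
    return state == "reported"

events = [("horizontal", "right-to-left"), ("vertical", "left-to-right")]
print(run_trial("vertical", events))                              # True
print(run_trial("horizontal", [("vertical", "left-to-right")]))   # False
```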
Aside from real-time visual processing, the task also requires memory and context-dependent decision making, elements that are commonly accepted as signs of cognition. Interestingly, the neural structures that form as this visual test is performed have shown a remarkable similarity to neural structures in the mammalian brain.
"The recurrent neural circuits implemented in the system have the same type of connectivity patterns found in the visual cortex of the cat," says Indiveri. "In particular, they implement soft winner-take-all circuits that are based on descriptions of canonical microcircuits found in the visual cortex."

Applications

This work sheds light on how the neural networks in the brain implement the higher cognitive functions, and offers some valuable insights as to how future neuromorphic chips could go about increasing performance even further.
"One of the goals of our work, and neuromorphic engineering in general, is to use this technology as a medium for understanding the principles that underlie neural computation. So my hope is that our work can contribute to the task of reverse engineering the way a brain works," says Indiveri.
In the more immediate future, the researchers will combine the chips with several sensory components at once, such as an artificial cochlea or retina, to create complex cognitive systems that interact with their surroundings on multiple levels, all in real time.

Sunday, 4 August 2013

Bluetooth Technology Next Generation

Tooth fillings acting as radio receivers may be nothing more than a myth, but scientists at the National Taiwan University are developing an artificial tooth that would send rather than receive transmissions. They’re working on embedding a sensor in a tooth to keep an eye on oral goings on, along with a Bluetooth transmitter to transmit the data and tell your doctor what your mouth's been up to.
Our mouth is our most multi-purpose orifice. We breathe with it, we taste with it, we use it for eating, for talking, for expressing emotions, for making love, and even for foolishly trying to open the occasional beer bottle. But scientists think it's also an untapped resource for monitoring people’s health. With this in mind, National Taiwan University researchers reasoned that if they could hook up the mouth with some sensors, it could help to better understand people’s habits and identify potential health problems, such as whether a person is smoking or drinking too much.
The current proof of concept prototype uses a wire instead of a Bluetooth transmitter
The tooth sensor is a first step in this direction. Designed to fit into an artificial tooth, it includes a tri-axial accelerometer that monitors mouth movements to figure out when the patient is chewing, drinking, speaking, or coughing, with the readings transmitted to a smartphone via Bluetooth.
Currently, the scientists are still at the proof-of-concept stage, so their first design dispensed with the James Bond-style artificial tooth embedded with a radio transmitter in favor of a small breakout board that’s been coated with dental resin. This makes it saliva-proof and able to be anchored to the subject’s dental work with dental cement, while the transmitter’s job is done by a wire running out of the mouth. This may seem a bit low tech, but it does prevent the subject from swallowing the device if it comes loose.
Eight subjects, five men and three women, had a sensor installed and were then asked to carry out a series of tasks, such as coughing, chewing gum, drinking water or reading out loud. According to the team, the sensor was able to correctly identify the particular oral activity with a 93.8 percent success rate when combining the data from all eight subjects, and with a 59.8 percent accuracy rate when using the data from seven subjects to figure out what the eighth was doing.
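The lower second figure reflects the stricter "leave-one-subject-out" protocol: train a classifier on seven subjects, test on the held-out eighth, and repeat. A minimal sketch of that protocol, with synthetic data and a simple nearest-centroid classifier standing in for the study's actual accelerometer features and method:

```python
import random
random.seed(0)

# Leave-one-subject-out evaluation sketch. Data is synthetic; the real
# study used tri-axial accelerometer readings for each oral activity.

ACTIVITIES = ["chewing", "coughing", "drinking", "speaking"]

def make_subject():
    # Each activity gets a noisy 3-axis "signature"; 10 samples per activity
    return [(a, [ACTIVITIES.index(a) + random.gauss(0, 0.3) for _ in range(3)])
            for a in ACTIVITIES for _ in range(10)]

subjects = [make_subject() for _ in range(8)]

def centroid_classify(train, sample):
    # Nearest-centroid: average each activity's features, pick the closest
    centroids = {}
    for a in ACTIVITIES:
        feats = [f for (label, f) in train if label == a]
        centroids[a] = [sum(axis) / len(axis) for axis in zip(*feats)]
    return min(centroids,
               key=lambda a: sum((s - c) ** 2
                                 for s, c in zip(sample, centroids[a])))

accuracies = []
for held_out in range(8):
    train = [x for i, s in enumerate(subjects) if i != held_out for x in s]
    test = subjects[held_out]
    correct = sum(centroid_classify(train, f) == label for (label, f) in test)
    accuracies.append(correct / len(test))

print(sum(accuracies) / len(accuracies))   # mean leave-one-subject-out accuracy
```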
The team is now working on the next prototype, which will transmit wirelessly and be powered by a rechargeable battery. After that, they will improve the system’s accuracy and address safety issues.

Saturday, 3 August 2013

TRASH AMPS JAM

We’ve seen big glass speakers and we’ve seen smaller models, but Trash Amps’ Jam takes the whole glass speaker thing down to a new level – it’s a speaker and amplifier, housed in a Mason jar.
Electronics do-it-yourselfers have been making glass jar speakers for a while now, perhaps most notably Sarah Pease with her audioJar. Like some of those DIY efforts, the Trash Amps Jam has its guts attached to the underside of the lid, with holes in that lid acting as a grille. In the case of the Jam, however, the jar’s tin lid has been replaced with more acoustically-friendly Baltic birch plywood.
An included curly cord with 3.5-mm plugs at either end allows users to play music from their mobile device through the Jam. An included adapter plug also allows them to use it with an electric guitar – an input switch lets them choose between MP3 and guitar line-in levels.
An input switch lets users choose between MP3 and guitar line-in levels


Little in the way of specs is available, although the device does apparently run for about 20 hours on one charge of its integrated battery.
Should you be nervous about breaking its glass body, Trash Amps also makes a speaker/amplifier that’s housed in a beverage can.
The Trash Amps Jam is currently available for about US$70, via the link below.


Motorola Next Generation Mobile Device

Google-owned Motorola is going after the middle of the smartphone market in a big way with its new Android flagship, the Moto X. At the heart of the device is Motorola's "X8" chipset, made up of a dual core Qualcomm S4 CPU, a quad-core Adreno GPU and two more cores that the company calls its "contextual core."
While it may sound nerdy, Moto X's ability to understand its role in the world around it at any given time is something that Motorola hopes will be a major selling point to consumers who might otherwise opt for an iPhone or the latest name brand Android handset from the likes of Samsung.
A key part of Moto X's hyper-awareness is its ability to constantly listen for its owner to say "OK, Google Now," which wakes up the phone and activates the Google Now voice-activated personal assistant.
The Moto X is loaded with sensors to allow it to be constantly aware of its environment
Gizmag was on hand at the official unveiling of the Moto X in New York this week and I was given a walk-through of the phone's "touchless" features and overall contextual awareness. Watch the video below from that event to see a demonstration by a Motorola rep, as well as the low-down on the customization options for the Moto X.

New Trainer To Help Toilet Train

A new toilet-training device developed by researchers at the University of Rochester combines a wearable sensor pad, Bluetooth technology, an iOS device and accompanying app to help toilet train intellectually disabled children. Rather than just providing entertainment like the iPotty, the Quick Trainer issues an alert the moment the child starts to pee, so adults can take them to the toilet and encourage them to use it. If all goes well, they are rewarded with treats to encourage them to head to the toilet the next time the need arises.
Similar to the Huggies TweetPee concept, the device features a disposable sensor resembling a panty liner that fits into the child's underwear, and a Bluetooth transmitter that snugly snaps onto the sensor. The sensor pad is made of soft fabric that is embedded with conductive thread that forms a circuit when exposed to moisture, while the Bluetooth module is battery powered and reusable. So when the child has an accident, a circuit is formed and the Bluetooth module sends a message to the parent's iOS device which sounds an alert and records the incident in a log.
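The alert-and-log flow described above is simple enough to sketch in software. The class and method names below are illustrative, not from the actual Quick Trainer app:

```python
from datetime import datetime

# Sketch of the flow: the pad's conductive-thread circuit closes when wet,
# the Bluetooth module sends a message, and the receiving app sounds an
# alert and records the incident in a log.

class TrainerApp:
    def __init__(self):
        self.log = []

    def on_sensor_message(self, child, timestamp=None):
        # Called when a closed circuit is reported by the sensor pad
        timestamp = timestamp or datetime.now()
        self.log.append((child, timestamp))
        return f"ALERT: accident detected for {child} at {timestamp:%H:%M:%S}"

app = TrainerApp()
msg = app.on_sensor_message("child A", datetime(2013, 8, 4, 9, 30, 0))
print(msg)            # ALERT: accident detected for child A at 09:30:00
print(len(app.log))   # 1 incident recorded for follow-up analysis
```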
Taking care of the child at that point becomes a simple four-step process. The parent or caregiver lets the child know that it's potty time and takes them there. Then there's getting them to sit on the potty and encouraging them to go if they still need to. Next comes a five-minute wait during which they hopefully do their business.
If the child does do the deed, they get a reward through a personalized picture-based reward menu on the iOS device (in addition to effusive praise). These rewards can include the playing of a favorite video, YouTube clip, game, song or even the option to choose a picture of a snack they'd like to receive. If the child has already done their business before making it to the toilet, they are thanked and reminded they'll get a reward next time and the sensor pad is removed and replaced with a fresh one.
iPod with the disposable sensor resembling a panty liner that fits into the child's underw...
According to the University of Rochester researchers who developed the Quick Trainer, children who'd been wearing disposable underwear for years were toilet trained in 45 days or less using the device. That's good news for parents of children with intellectual disabilities, autism and Down syndrome, for whom the toilet training process can be a nightmare.
"One study suggests that it takes about a year-and-a-half to train children with autism, and many do not use the toilet independently even through their school age years and beyond," Daniel Mruzek, Associate Professor of Pediatrics, University of Rochester Medical Center, told Gizmag.
Most parents and teachers tackle the problem by scheduling trips to the toilet and rewarding kids when they do go, but these are stopgap solutions at best. "Consider two to three 10-minute diaper or pull-up changes during each school day across entire school years," Mruzek says. A watchful eye doesn't help either, since these kids often don't do a potty dance or display any outward signs when they need to go. Fear of potty accidents is a quality-of-life issue for both parents and children that severely affects all their daily activities.
Mruzek, along with bioengineer Stephen McAleavey, set out to create a wireless wearable sensor system to tackle the problem. Earlier versions of the Quick Trainer featured bulkier components, which the team scaled down to a two-part system: a sensor/transmitter combo that fits in a child's undergarments, and a receiver/pager unit consisting of an iOS device and a potty training app, to be carried by the parent or placed nearby.
McAleavey states that the device can operate at a range of 150 ft (45 m) outdoors and at least 30 ft (9 m) indoors, through walls and doors. It can also monitor several children at once and records the date and time of the accident for follow-up analysis, with parents able to send the data to a clinician via email. Additionally, they can manually log and email their child's successful trips to the bathroom.
Initial results show a lot of promise. An 11-year-old girl with severe intellectual disability began using the toilet without accidents after 40 days, and a 15-year-old boy with the same condition needed only 26 days. Both had histories of repeated, unsuccessful training attempts and were using disposable pull-ups before using the Quick Trainer.
Being trained with the Quick Trainer doesn't mean having to use it forever. According to the team, even children showing no outward signs of wanting to potty begin to develop clear signals such as rocking, pacing, vocalizing and grabbing the iOS device when they need to go. Parents can use these behaviors to initiate potty trips, gradually reducing their child's dependence on the device until they don't need it any longer.
After larger clinical trials, the team plans to develop the technology further to assist individuals with other types of disabilities, as well as the elderly receiving care, to help them become as independent as possible. Initially funded through the crowdfunding site Innovocracy, the project is currently being supported by the Autism Treatment Network.

Lamborghini Most Extreme Car

Lamborghini has announced that the latest model in its Gallardo line-up will make its world premiere at the 2013 Frankfurt Motor Show. Based on Lamborghini’s Super Trofeo track cars, the new LP 570-4 Squadra Corse sports a 570 hp, V10 engine that will launch it from 0 to 100 km/h (62 mph) in a prompt 3.4 seconds before hitting a top speed of 320 km/h (198 mph).
The Italian manufacturer with a penchant for angular carbon fiber describes this latest model as the most extreme yet in the Gallardo line-up. In essence, it's a track racer with street going personality traits.
To break down the car's moniker, 570 refers to the engine's 570 hp output, while the 4 refers to the all-wheel drive setup. Squadra Corse refers to the recently-founded division within Automobili Lamborghini that manages all of the company’s motorsport activities.
The new Squadra Corse comes standard with carbon ceramic brakes, Lamborghini’s 6-speed paddle driven transmission and, like the Trofeo series model, carries with it a rear wing capable of generating three times the downforce of the Gallardo LP 560-4. A removable engine cover, with a quick release system is also a carryover from the race versions. Both the cover and rear wing are made from carbon composites to reduce weight.
Speaking of weight loss programs, the new Gallardo tips the scales at a svelte 1340 kg (2954 lb), 70 kg (154 lb) lighter than the LP 560-4. Carbon fiber and aluminum architecture is used throughout, resulting in a stiffer, lighter chassis.
Alcantara and carbon fiber mix beautifully throughout the Squadra Corse's interior
Aesthetically, beyond the big honking rear wing, there’s the usual array of limited edition add-ons. Door panels, racing seats, center console cover, part of the steering wheel and other various bits are composed of carbon fiber with a dash of Alcantara used throughout to soften the harsh, industrial feel. Buyers also have the option of replacing the Squadra Corse’s racing seats with standard seats should they wish.
The LP 570-4 Squadra Corse will make its world debut at the 2013 Frankfurt Motor Show next month, where Gizmag will be on hand for a closer look. The Squadra Corse will be available in Giallo Midas yellow, Bianco Monocerus white, Grigio Thalasso grey and, of course, Rosso Mars red. Pricing is to be revealed later this year.

New World Of Photography Using Panorama Method

Photography group 360Cities seems determined to capture every major city in the world in as much detail as possible. Shortly after putting together a 360-degree panorama of London, and breaking the record for the world's largest photo in the process, the group's founder Jeffrey Martin set his sights on Tokyo for his next project. This latest panorama may not trump his old record, but at 180 gigapixels, it's still the second-largest photo ever taken.
Back in September of 2012, Martin spent two days on the roof of the Tokyo Tower's lower observation deck to shoot the 10,000 individual images that would eventually form the completed panorama. Each photo was shot with a Canon 7D digital SLR fitted with a Canon 400-mm f5.6 L lens. The camera was mounted to a Clauss Rodeon VR Head ST robotic panorama rig, moved to three spots around the tower, and programmed to automatically capture the entire vantage point.
As you might expect, the level of detail seen in the panorama is impeccable
Fujitsu Technology Solutions sponsored the project and provided the Celsius R920 workstation that pieced together the final panorama into an image that viewers can explore by panning and zooming in on the scenery. Even with 192 GB of RAM and a 12-core processor, the computer needed 12 weeks to process the image, plus some extra time to convert it into an interactive panorama for online viewing.
It may fall well short of breaking the record for the world's largest photo, which clocked in at a mammoth 320 gigapixels, but this is still the largest photo of Tokyo ever made. The full image measures 600,000 x 300,000 pixels, which would produce a photo stretching 100 m (328 ft) wide and 50 m (164 ft) tall if it were printed at a normal photographic resolution. From the camera's viewpoint of 20 stories high, it's possible to spot specific structures and landmarks up to 30 km (18 miles) away, including the city's tallest building, the Tokyo Skytree.
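Those numbers hang together, as a quick check shows (the dots-per-inch figure is derived from the quoted print width, not stated in the article):

```python
# Checking the panorama's pixel count and the implied print resolution
# for the quoted 100 m x 50 m print size.

width_px, height_px = 600_000, 300_000
gigapixels = width_px * height_px / 1e9
print(gigapixels)                      # 180.0 gigapixels, as quoted

print_width_m = 100
px_per_m = width_px / print_width_m    # 6000 pixels per metre of print
dpi = px_per_m * 0.0254                # metres -> inches
print(round(dpi, 1))                   # ~152 dpi, a normal photographic density
```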
Each photo was shot with a Canon 7D digital SLR fitted with a Canon 400mm f5.6 L lens
The level of detail seen in the panorama is remarkable. Zooming into some areas, you can easily make out an individual person's face, read license plates, and even peek inside some shop windows. There are a few stray glitches here and there (lighting that shifts unnaturally, buildings merged with plants, duplicated people and cars, etc.), but they don't detract from the stunning snapshot of the city laid out before you.
If you want to explore the city of Tokyo for yourself, head over to 360Cities website to view the full 360-degree panorama in your browser.

Get Ready For 3D Printing With Kinect

Scanning and 3D printing an object could become much simpler if 3D printing company Volumental is successful in crowdfunding the development of a web app which would allow users to scan and print 3D objects using nothing more than a Kinect sensor and a web browser.
Though the company already has a web service that allows people to upload scanned 3D models, Volumental says that it needs to refine an app which is better able to differentiate a thing (toys, pets, family members are among the suggestions) from its surroundings in order to be able to print the object in isolation. Though this is a tricky problem to solve, the company claims it knows how to do it, and simply needs to hire a developer to get it done.
If funded, the app raises the exciting prospect of being able to scan more or less anything. Connect your Kinect sensor to a laptop tethered to a smartphone and you theoretically have yourself a portable 3D scanner with which to snap a quick model of anything you fancy a 3D print of, which would arrive soon after on your doorstep. The team claims the process will be as easy as streaming a movie using Netflix.
Why not scan and print your family?
Volumental is aiming to develop the app inside of three months. Though the delivery date for pledges is January 2014, the company says this represents a "worst case scenario."
Though that sounds ambitious, Volumental is not a beginner in the field of 3D scanning. The company grew out of Kinect@Home, a web project developed by Stockholm Royal Institute of Technology computer science PhD Alper Aydemir and PhD students Rasmus Göransson and Miroslav Kobetski that allows Kinect owners to upload 3D scans to the web.
The available pledges get interesting at the US$50 mark, which will net you a 3D print of any model you scan and upload. Other pledges make 3D models available for download by people with access to their own 3D printers. Some also throw in a depth camera for people who don't already have one. This may not be a Kinect, as any OpenNI-standard depth camera should work.

Android's Next Official Find My Phone Service

The Google Play store is full of third-party apps that will let you find a lost or stolen phone. Until now, though, there wasn't an official solution from Google. Well, apparently Larry Page and company decided it was about time, and we'll soon see the fruits of that, in the form of the Android Device Manager.
Set to launch later this month, the Android Device Manager (ADM) will offer a handful of tools to help you track down your prized Galaxy S4, Moto X, or HTC One (or any other Android phone running 2.2 Froyo or above).

What it does

If you lose your Nexus 4 at Google Headquarters, you can easily find it on a map (Original...
If your phone is lost somewhere at home, wedged inconspicuously between couch cushions perhaps, you can have your phone automatically ring on maximum volume. Why not just call it, you say? Well, that's an option you can use today, but if your phone is silenced or set to ring on low volume, you might not hear it. ADM will turn that ring up to 11, no matter what it's set to.
If the situation is a bit more serious, and your phone has been lost or stolen farther from home, then you'll be able to see its real-time location on a map. If it looks like some unsavory character has taken your phone, you can remotely wipe all your data.
It looks like Android Device Manager will be downloadable from Google Play. Google says that you'll need to be signed in to your Google account to use it, and there will also be an Android app to help you track it down if you lose it.
These are hardly revolutionary new features: iPhones and iPads have done this for ages, and, as we said, there are already third-party services on Android. But it's nice to have an official solution from Mountain View.

Panasonic's Lumix DMC-GX7 Mirrorless Camera

Panasonic has revealed its latest retro-styled mirrorless interchangeable lens camera, the Lumix DMC-GX7. As well as making some detail and color saturation improvements to its newly-redesigned Live MOS sensor, the company has also treated the new addition to built-in Wi-Fi and NFC capabilities, in-body image stabilization, and a silent shooting mode. Stealing the show, however, are the Live Viewfinder and the rear display panel – both of which tilt.
The Lumix GX7 high-end Digital Single Lens Mirrorless (DSLM) camera sports a camcorder-like tilting electronic viewfinder with 90 degrees of adjustment, allowing photographers to look down into the 2.76-million-dot resolution Live Viewfinder (LVF) Brownie-style, shoot with a cheek firmly pressed against the display panel like a compact, or work anywhere in between.

The LVF is reported to have 100 percent color reproduction, and benefits from a built-in sensor that turns it on or off depending on how the photographer uses the camera, automatically switching the displayed image between viewfinder and rear display.
That display is a 3-inch, 1.04 million dot resolution touch panel that also tilts up and down, to raise the camera above the crowd or get close to the ground, and still allow the photographer to frame the shot. The touch sensitivity of the rear monitor can be adjusted to minimize the chance of false operation while using the LVF.
By attaching the monitor's front panel directly to the In Cell Touch LCD, Panasonic says that bothersome reflection has been significantly reduced, and Touch AF lets users set the focus by touching the desired subject on the panel. Fingertip zoom is also possible, as well as touch shutter control.
Moving inside the GX7's magnesium alloy die-cast frame, Panasonic has included a newly-developed 16-megapixel (17.3 x 13 mm) Live MOS sensor that's reported to deliver better color saturation, higher sensitivity and less noise than its predecessor, the DMC-GX1 launched late in 2011. When the camera's Venus imaging engine is added into the mix, the GX7 offers improved reproduction of finer image detail and a low-light-friendly sensitivity of up to ISO 25600. The camera also boasts a fast 1/8000th of a second shutter speed to help ensure sharp capture of moving subjects.

In common with the rest of the Lumix G cameras from Panasonic, the GX7 uses a contrast autofocus system for fast and accurate performance. To help keep focus wait to a minimum, this model is capable of a digital signal exchange between camera and lens of 240 frames per second. For precision shooting, there's a focus peaking function that highlights areas in the scene that are in focus, and picture-in-picture display for precision focusing on a subject while also showing the whole composition.
There's no need to worry about getting the shakes when attaching classic lenses onto the GX7 with an adapter fitted to the Micro Four Thirds lens mount, as optical image stabilization (that's claimed to be almost as effective as the Mega OIS system used in Panasonic's DSLM lenses) has been incorporated in the camera body. The GX7 benefits from a silent mode that switches the shutter from mechanical to electronic, while also shutting off the sound, AF assist lamp and integrated flash. It's also capable of five frames per second burst shooting at full resolution with a mechanical shutter, or up to 40 fps from an electronic shutter.
Not only has built-in 802.11b/g/n Wi-Fi connectivity been included, but the camera also sports NFC technology for easy, instant connection to smartphones or tablets. Remote viewing and shooting is possible via the Panasonic Image App for iOS and Android.

To the right of the touch panel are button and dial controls for more traditional menu and control access, and there's twin dial control over aperture, shutter speed or exposure. Creative photography options include Creative Panorama, Time Lapse, and Stop Motion. Creative Control mode boasts 22 filter effects, including rough or silky monochrome, soft focus and the ever-popular sepia. Clear Retouch allows users to remove unwanted parts of the image after the shot has been taken.
Rounding off quite an impressive set of specs is Full HD video recording at up to 60 fps in AVCHD Progressive or MP4, with stereo microphones (with a useful wind cut feature) grabbing the audio. Autofocus, Touch AF and Tracking AF can all be used while shooting video.
The Lumix GX7 has been given an estimated ship date of October and comes in black or silver for a body-only suggested retail of US$999.99. Lens kit options will also be available.
 The 3-inch, 1.04 million dot resolution touch panel tilts up, to raise the camera above th...

Make a Youtube Video Playlist

When you make a group of videos and upload them to YouTube, it is always a good idea to create a YouTube video playlist. This makes it easy for people to view one video and then move on to the next, as they will be grouped together. For example, I have a YouTube channel called Computer basics where I upload videos about computer tips. There are just under 300 videos on my channel, and making a YouTube video playlist is the best way to group that many videos into sections.

How to create a YouTube video playlist

1.  Sign into your YouTube account.
2.  Click on the drop-down arrow beside your username.
3.  A menu will appear. Choose Videos from the menu.
YouTube account
4.  You will see all of your uploaded videos. Click on the arrow on the New button. See the screenshot below.
YouTube video playlist
5.  Now you can name your new playlist.

How to add a video to a YouTube video playlist

1.  When you are viewing your uploaded videos there will be an Add to button.
YouTube video
2.  Make sure you have selected the video you want to add by ticking the box for it.
3.  Click on the arrow next to the Add to button to see the drop-down menu.
4.  Choose Playlist.
5.  Now you will see your YouTube video playlists. Choose which one you want to add the video to.
YouTube video playlist
See more YouTube video tips.
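If you manage a lot of videos, the same steps can also be scripted. Here is a sketch using the YouTube Data API v3 via the google-api-python-client library; it assumes you have already built an authorized `youtube` client (OAuth setup omitted), and the helper function names are my own, not part of the API:

```python
# Scripted equivalents of the playlist steps above, against the
# YouTube Data API v3. Only the request-body builders run here; the
# insert calls need an authorized client.

def playlist_body(title, privacy="public"):
    # Body for playlists.insert (part="snippet,status")
    return {"snippet": {"title": title},
            "status": {"privacyStatus": privacy}}

def playlist_item_body(playlist_id, video_id):
    # Body for playlistItems.insert (part="snippet")
    return {"snippet": {"playlistId": playlist_id,
                        "resourceId": {"kind": "youtube#video",
                                       "videoId": video_id}}}

def create_playlist(youtube, title):
    # Steps 1-5 of "How to create a YouTube video playlist"
    return youtube.playlists().insert(
        part="snippet,status", body=playlist_body(title)).execute()

def add_video(youtube, playlist_id, video_id):
    # The "Add to" steps above
    return youtube.playlistItems().insert(
        part="snippet", body=playlist_item_body(playlist_id, video_id)).execute()

print(playlist_body("Computer basics")["snippet"]["title"])
```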

Here is a video about creating a YouTube video playlist.

Why create a YouTube video playlist?

Creating a YouTube video playlist definitely encourages more views on your videos. It will also help people find related videos from your YouTube channel, instead of having to surf around YouTube hoping to stumble on them elsewhere.
A YouTube video playlist can also get you ranking in YouTube search results for different keywords. I have noticed that playlists show up before a single, lonely video will. That is just another great reason to create a YouTube video playlist.