Researchers at MIT and elsewhere developed a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.
You can find more information on the MIT site.
Quin teaches Arduino, an open-source platform of small, easy-to-program boards designed so that people without advanced engineering expertise can build all kinds of devices.
One of his objectives is to revolutionize education, and he advocates introducing these methods in schools to help children learn electronics and programming.
" A special gift "
His parents, who have no engineering training, say that from a very young age Quin showed a special gift.
According to his mother, she realized from the beginning that her son had engineering skills, because he loved lining up all kinds of things and doing puzzles. At only three years old, he could already solve mathematical problems.
With Arduino modules, Quin has built all kinds of devices, from a robotic cleaner to a cap he calls the "gas cap", an invention that has more to do with something that usually interests a child his age: farts.
What Quin did was incorporate a methane sensor into a small device that connects wirelessly to a cap covered in lights.
The device goes in a pocket, and when its wearer passes gas, the methane sensor detects it and signals the cap to turn its lights on. The higher the methane concentration, the brighter the lights.
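The threshold-and-brightness behaviour described above can be sketched in a few lines. This is a minimal, hypothetical simulation of the logic; the threshold and scaling values are invented for illustration and are not taken from Quin's actual design:

```python
def cap_brightness(methane_ppm, threshold_ppm=400, max_ppm=2000):
    """Map a methane reading to an LED brightness level (0-255).

    Below the threshold the lights stay off; above it, brightness
    scales linearly with concentration up to a saturation point.
    All numeric values here are illustrative assumptions.
    """
    if methane_ppm < threshold_ppm:
        return 0
    fraction = min((methane_ppm - threshold_ppm) / (max_ppm - threshold_ppm), 1.0)
    return round(fraction * 255)

# Quiet air stays dark; a strong reading lights the cap fully.
print(cap_brightness(100))    # below threshold: lights off
print(cap_brightness(2000))   # at or above max: full brightness
```

On a real Arduino board, the same mapping would typically drive the LEDs with a PWM output, with the brightness value sent over the wireless link to the cap.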
Interactive technology.
A year ago, Quin founded his first company, Qtechknow, dedicated to distributing soldering equipment for beginners and interface boards for intermediate users.
It also offers a complete kit for learning programming on the Arduino platform.
Thanks to his passion for Arduino, he now dedicates himself to teaching other children and adults in classes such as those at the MIT Club.
In five years, Quin wants to pursue his studies at MIT and one day become an educator and an electronics engineer.
A 110-core CPU chip has been developed by computer scientists at the Massachusetts Institute of Technology. The chip is based on a new architecture in which instead of bringing data across the chip to the core that happens to want it, you move the program to the core where the data is stored. In practice, this new architecture reduces the amount of on-chip data exchange tenfold, along with the heat and infrastructure demanded by conventional chip architecture.
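The architectural idea can be illustrated with a toy sketch. This is not MIT's actual design; it is a hypothetical Python model of the principle of dispatching a task to the core that already holds its data, rather than copying the data across the chip:

```python
# Toy model of "move the program to the data" scheduling.
# Names and structure are invented for illustration only.

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.local_data = {}   # data resident on this core

    def run(self, task):
        # The task executes where its operand lives; no data moves.
        return task(self.local_data)

def dispatch(cores, key, task):
    """Send `task` to whichever core holds `key` in local storage."""
    for core in cores:
        if key in core.local_data:
            return core.run(task)
    raise KeyError(key)

cores = [Core(i) for i in range(4)]
cores[2].local_data["x"] = [1, 2, 3]   # "x" lives on core 2

# The sum runs on core 2; the list never crosses the chip.
total = dispatch(cores, "x", lambda data: sum(data["x"]))
print(total)
```

The point of the sketch is the dispatch step: shipping a small task description is far cheaper than shipping the data it operates on, which is the source of the tenfold reduction in on-chip traffic the article describes.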
Scientists at MIT and Harvard accidentally create the possibility of a Star Wars laser sword.
George Lucas taught us that a beam of light can make a powerful weapon; certainly the most famous example is the lightsaber wielded by the Jedi in the famous Star Wars saga, an accessory that every geek and fan of these science-fiction films would love to own, even as a toy.
But that dream may be a step closer today, thanks to scientists who created the basis for a lightsaber unwittingly (by mistake).
According to the newspaper ABC, researchers at MIT and Harvard have announced a technological development that could theoretically be used to build a real laser sword.
A study published in the Harvard Gazette reports that the Center for Ultracold Atoms (CUA), a collaboration between scientists at the Massachusetts Institute of Technology (MIT) and Harvard University, has managed, against all odds, to coax photons into binding together into molecules, so that, at least in principle, two such beams could clash against each other like the sabers in the Star Wars duel that costs one contender a hand.
" It is not inappropriate at all compare this technology with laser - swords explains Professor of Physics at Harvard MikhailLukin . When these photons interact with each other , are pushing and blocking the other.
Physical reactions what is happening between these molecules is similar to what can be seen in the films of the famous film franchise " , says the researcher . "In this experiment , we used beams of light and have faced from six different positions to finally get that light rays managed to cool the atoms " ; action for researchers has been a dream for the past several years , and today have achieved.
In any case, it is expected that in the short term, this new technology has resulted in an " elegant weapon for a more civilized age of Humanity ," as Obi - Wan Kenobe said.
Video: Folding electric micro car demo - Armadillo-T by KAIST.
KAIST, the Korea Advanced Institute of Science and Technology, has revealed a folding electric vehicle prototype similar to the Hiriko designed by MIT and currently developed by a European consortium. Named Armadillo-T, the micro-vehicle has four in-wheel motors and a 13.6 kWh battery for a total weight of up to 500 kg, allowing it to reach a speed of 60 km/h with a range of around 100 km.
Final details: its doors open gull-wing style, and its mirrors give way to cameras.
The Armadillo-T is a first prototype and the team that developed the car says that the road to commercialization is still long...
With a maximum speed of 37 mph (60 km/h), the Armadillo-T can travel 62 miles (100 km) after 10 minutes of lithium-ion battery charging. After parking, the car can be folded in length from 110 inches down to almost half, 65 inches, via a smartphone interface. To save space, cameras replace the side and rear-view mirrors. According to the Korean edition of the Wall Street Journal, three folded Armadillo-Ts can fit into a standard-sized Korean parking space using the same smartphone application.
(The video may not be viewable in your country.) The article continues after the video.
A camera developed at the MIT Media Lab allows photographers to shoot around corners by examining light bouncing off other objects.
The first camera that can take pictures around a corner is shown off by US scientists at the Massachusetts Institute of Technology.
The prototype uses an ultra-short high-intensity burst of laser light to illuminate a scene.
The device constructs a basic image of its surroundings - including objects hidden around the corner - by collecting the tiny amounts of light that bounce around the scene.
The Massachusetts Institute of Technology team believe it has uses in search and rescue and robot vision.
"It's like having x-ray vision without the x-rays," said Professor Ramesh Raskar, head of the Camera Culture group at the MIT Media Lab and one of the team behind the system.
"But we're going around the problem rather than going through it."
Professor Shree Nayar of Columbia University, an expert in light scattering and computer vision, was very complimentary about the work and said it was a new and "very interesting research direction".
"What is not entirely clear is what complexities of invisible scenes are computable at this point," he told BBC News.
"They have not yet shown recovery of an entire [real-world] scene, for instance."
Flash trick
Professor Raskar said that when he started research on the camera three years ago, senior people told him it was "impossible".
However, working with several students, he has turned the idea into a reality.
The heart of the room-sized camera is a femtosecond laser, a high-intensity light source which can fire ultra-short bursts of laser light that last just one quadrillionth of a second (that's 0.000000000000001 seconds).
The light sources are more commonly used by chemists to image reactions at the atomic or molecular scale.
For the femtosecond transient imaging system, as the camera is known, the laser is used to fire a pulse of light onto a scene.
The light particles scatter and reflect off all surfaces including the walls and the floor.
If there is a corner, some of the light will be reflected around it. It will then continue to bounce around the scene, reflecting off objects - or people - hidden around the bend.
Some of these particles will again be reflected back around the corner to the camera's sensor.
Here, the work is all about timing.
Following the initial pulse of laser light, the camera's shutter remains closed to stop the sensitive sensors being overwhelmed by the first high-intensity reflections.
This method - known as "time-gating" - is commonly used by cameras in military surveillance aircraft to peer through dense foliage.
In these systems, the shutter remains closed until after the first reflections off the tops of the trees. It then opens to collect reflections of hidden vehicles or machinery beneath the canopy.
Similarly, the experimental camera shutter opens once the first reflected light has passed, allowing it to mop up the ever-decreasing amounts of reflected light - or "echoes" as Prof Raskar calls them - from the scene.
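The time-gating step described above amounts to a simple filter over timestamped samples: discard everything that arrives before the gate opens, keep the faint echoes that follow. This is a hypothetical illustration; the sample values below are invented:

```python
# Minimal sketch of time-gating (illustrative, not the MIT hardware).

def time_gate(samples, gate_open_ns):
    """Keep only samples arriving at or after the gate-open time.

    samples: list of (arrival_time_ns, intensity) tuples.
    """
    return [(t, i) for (t, i) in samples if t >= gate_open_ns]

samples = [
    (1.0, 900.0),   # blinding direct first-bounce reflection (discarded)
    (4.2, 3.1),     # faint echo from around the corner
    (5.8, 1.4),     # even fainter later echo
]
echoes = time_gate(samples, gate_open_ns=2.0)
print(echoes)
```

Note the intensity scale: the direct reflection is hundreds of times brighter than the echoes, which is why the gate has to stay closed while it passes.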
Unlike a standard camera that just measures the intensity and position of the light particles as they hit the sensor, the experimental set up also measures the arrival time of the particles at each pixel.
This is the central idea used in so-called "time-of-flight cameras" or Lidar (Light Detection And Ranging) that can map objects in the "line of sight" of the camera.
Lidar is commonly used in military applications and has been put to use by Google's Street View cars to create 3D models of buildings.
Professor Raskar calls his set-up a "time-of-flight camera on steroids".
Both use the speed of light and the arrival time of each particle to calculate the so-called "path length" - or distance travelled - of the light.
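The path-length relation is straightforward to state in code: distance travelled equals the speed of light times the travel time. A minimal sketch of that time-of-flight calculation:

```python
# Path length from arrival time, the basic time-of-flight relation.

C = 299_792_458.0  # speed of light in vacuum, m/s

def path_length_m(arrival_time_s):
    """Total distance travelled by a photon detected after the given time."""
    return C * arrival_time_s

# A photon detected 10 nanoseconds after the pulse has travelled about 3 m,
# which is why femtosecond-scale timing is needed to resolve a room.
d = path_length_m(10e-9)
print(d)
```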
To build a picture of a scene, the experimental set up must repeat the process of firing the laser and collecting the reflections several times. Each pulse is done at a slightly different point and takes just billionths of a second to complete.
"We need to do it at least a dozen times," said Professor Raskar. "But the more the better."
The system then uses complex algorithms - similar to those used in medical CAT scans - to construct a probable 3D model of the surrounding area - including objects that may be hidden around the corner.
"In the same way that a CAT scan can reveal what is inside the body by taking multiple photographs using an x-ray source in different positions, we can recover what is beyond the line of sight by shining the laser at different points on a reflective surface," he said.
Look ahead
At the moment, the set-up only works in controlled laboratory conditions and can get confused by complex scenes.
The images produced by the camera are basic.
"It looks like they are very far from handling regular scenes," said Prof Nayar.
In everyday situations, he said, the system may compute "multiple solutions" for an image, largely because it relied on such small amounts of light and it was therefore difficult to extrapolate the exact path of the particle as it bounced around a room.
"However, it's a very interesting first step," he said.
It would now be interesting to see how far the idea could be pushed, he added.
Professor Raskar said there are "lots of interesting things you can do with it.
"You could generate a map before you go into a dangerous place like a building fire, or a robotic car could use the system to compute the path it should take around a corner before it takes it."
However, he said, the team initially aim to use the system to build an advanced endoscope.
"It's an easy application to target," he said. "It's a nice, dark environment."
If the team get good results from their trials, he said, they could have a working endoscope prototype within two years.
"That would be something that is room-sized," he said. "Building something portable could take longer."