
DESKTOP MACHINE CARVES METAL AND WOOD LIKE BUTTER

How many desktop 3D printers have we seen on Kickstarter in recent years? Too many to count. But 3D printing is only half of the digital manufacturing promise. Where 3D printing is additive, CNC machines, guided by digital designs, subtract material. Give a CNC machine a digital file, and it’ll painstakingly sculpt your design from a solid block of material like some kind of robotic Leonardo da Vinci. But most CNC machines are big and expensive. They aren’t typically available to your average maker or tinkerer.

Or if they are, they’re kits requiring assembly. Now, however, a Kickstarter campaign is aiming to remedy the situation by offering an affordable, pre-assembled desktop carving machine called Carvey. Carvey is an enclosed desktop CNC router. It accommodates a range of milling bits, has a build area of a foot by eight inches, carves up to a depth of 2.75 inches, and works with dozens of materials including woods, soft metals, plastics, waxes, and foams.

The machine uses its own proprietary web app, Easel, or the CAD, CAM, and machine control software of your choice. In Easel, users draw a 2D design, the software converts it to 3D, and after selecting a material, the machine carves away. What might one make with Carvey? The campaign shows silver jewelry, acetate and wood sunglasses, a fiberboard speaker box, a walnut and silver metallic acrylic address sign, and an acrylic and birch circuit board and electronics enclosure. (And why couldn’t you download a file for a simple tool, say a wrench, or a replacement part and fabricate it at home?) The campaign has raised almost five times its $50,000 goal with nearly a month to go. The team says they’ve been developing Carvey for over a year and a half.
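The campaign doesn’t show Easel’s raw output, but desktop CNC routers like Carvey are ultimately driven by toolpath files, most commonly G-code, that the CAM step generates from a design. As a rough, hypothetical illustration of what that step produces, here is a minimal Python sketch that emits G-code for clearing a rectangular pocket; the feed rate, stepover, and depth-per-pass values are invented for the example, not Carvey’s or Easel’s actual settings.

```python
# Minimal sketch: generate G-code to clear a rectangular pocket with a CNC router.
# All numbers (feed rate, stepover, depth per pass) are illustrative assumptions,
# not values used by Carvey or Easel.

def pocket_gcode(width_mm, height_mm, depth_mm, tool_d=3.175,
                 depth_per_pass=1.0, feed=600, safe_z=5.0):
    """Return G-code lines that clear a width x height pocket down to depth_mm."""
    stepover = tool_d * 0.4          # 40% stepover between parallel passes
    lines = ["G21 ; millimeters", "G90 ; absolute coordinates",
             f"G0 Z{safe_z:.2f}"]
    z = 0.0
    while z > -depth_mm:
        z = max(z - depth_per_pass, -depth_mm)
        lines.append("G0 X0 Y0 ; return to pocket corner")
        lines.append(f"G1 Z{z:.2f} F{feed // 2} ; plunge to next depth")
        y, direction = 0.0, 1
        while y <= height_mm:
            x_end = width_mm if direction == 1 else 0.0
            lines.append(f"G1 X{x_end:.2f} Y{y:.2f} F{feed} ; zig-zag cutting pass")
            y += stepover
            direction *= -1
        lines.append(f"G0 Z{safe_z:.2f} ; retract before next depth")
    lines.append("M2 ; end of program")
    return lines

if __name__ == "__main__":
    print("\n".join(pocket_gcode(50, 30, 6)))
```

Running the script prints a few dozen lines of G-code; a real CAM package adds tool changes, ramped plunges, and finishing passes, but the basic structure is the same.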

They have a working prototype, and the Kickstarter will fund a manufacturing run. Early backers can get a machine for $1,999; later backers will pay $2,399.

They’re aiming to fulfill orders by this time next year—but it’s a complex project, so, grain of salt. Also, although Carvey’s software seems much more user friendly than standard 3D modeling software, we wonder if it’s still less suited to the average weekend crafter and more to makers with experience in design programs and troubleshooting in the workshop. And Carvey isn’t set up to do full 3D—so don’t expect to carve a fully three-dimensional sculpture from a hunk of metal. Five-axis routers, like the one from Daishin in the video below, still cost into the hundreds of thousands of dollars.

All that said? This is a pretty cool idea. Desktop 3D printers are stuck on one material. They’re relatively slow.

And the plastic they use is expensive. Carvey, on the other hand, is multi-material, looks to be pretty fast, and uses conventional materials at, presumably, conventional prices. Sure, consumer 3D printing is improving, there are items you can’t make any other way, and Carvey may well have shortcomings that aren’t readily apparent. But it’s awesome that, for the cost of an (expensive) laptop, you could plug that same computer into a machine that precision-carves a solid block of metal in your den or garage.

Software development for electronics, by IBM

The electronics industry has seen dramatic changes – customers have demanded accelerated product innovation, forcing the compression of product life cycles. The software embedded in these devices and systems has driven this innovation, but is challenged by development complexity. How can you revamp your engineering design and development processes to remain profitable? This whitepaper discusses key factors in accelerating innovation and time to market for electronic devices, focusing on embedded software development. View now to see essential takeaways, including:

• Accelerate development cycles by reusing existing intellectual property
• Reduce costs with the automation of production code generation
• And more

Please read the attached whitepaper.

The Physics of Interstellar Travel
To one day, reach the stars.

Michio Kaku

When discussing the possibility of interstellar travel, there is something called “the giggle factor.” Some scientists tend to scoff at the idea of interstellar travel because of the enormous distances that separate the stars. According to Special Relativity (1905), no usable information can travel faster than light locally, and hence it would take centuries to millennia for an extra-terrestrial civilization to travel between the stars.

Even the familiar stars we see at night are about 50 to 100 light years from us, and our galaxy is 100,000 light years across. The nearest galaxy is 2 million light years from us. The critics say that the universe is simply too big for interstellar travel to be practical. Similarly, investigations into UFO’s that may originate from another planet are sometimes the “third rail” of someone’s scientific career.

There is no funding for anyone seriously looking at unidentified objects in space, and one’s reputation may suffer if one pursues an interest in these unorthodox matters. In addition, perhaps 99% of all sightings of UFO’s can be dismissed as being caused by familiar phenomena, such as the planet Venus, swamp gas (which can glow in the dark under certain conditions), meteors, satellites, weather balloons, even radar echoes that bounce off mountains. (What is disturbing to a physicist, however, is the remaining 1% of these sightings, which are multiple sightings made by multiple methods of observations.

Some of the most intriguing sightings have been made by seasoned pilots and passengers aboard airline flights, which have also been tracked by radar and videotaped. Sightings like this are harder to dismiss.) But to an astronomer, the existence of intelligent life in the universe is a compelling idea by itself, in which extra-terrestrial beings may exist around other stars who are centuries to millennia more advanced than we are. Within the Milky Way galaxy alone, there are over 100 billion stars, and there are an uncountable number of galaxies in the universe.

About half of the stars we see in the heavens are double stars, probably making them unsuitable for intelligent life, but the remaining half probably have solar systems somewhat similar to ours. Although none of the over 100 extra-solar planets so far discovered in deep space resemble ours, it is inevitable, many scientists believe, that one day we will discover small, earth-like planets which have liquid water (the “universal solvent” which made possible the first DNA perhaps 3.5 billion years ago in the oceans). The discovery of earth-like planets may take place within 20 years, when NASA intends to launch the space interferometry satellite into orbit, which may be sensitive enough to detect small planets orbiting other stars. So far, we see no hard evidence of signals from extra-terrestrial civilizations from any earth-like planet. The SETI project (the search for extra-terrestrial intelligence) has yet to produce any reproducible evidence of intelligent life in the universe from such earth-like planets, but the matter still deserves serious scientific analysis. The key is to reanalyze the objection to faster-than-light travel. A critical look at this issue must necessarily embrace two new observations.

First, Special Relativity itself was superseded by Einstein’s own more powerful General Relativity (1915), in which faster-than-light travel is possible under certain rare conditions. The principal difficulty is amassing enough energy of a certain type to break the light barrier. Second, one must therefore analyze extra-terrestrial civilizations on the basis of their total energy output and the laws of thermodynamics. In this respect, one must analyze civilizations which are perhaps thousands to millions of years ahead of ours. The first realistic attempt to analyze extra-terrestrial civilizations from the point of view of the laws of physics and the laws of thermodynamics was by Russian astrophysicist Nicolai Kardashev.

He based his ranking of possible civilizations on total energy output, which could be quantified and used as a guide to explore the dynamics of advanced civilizations:

Type I: this civilization harnesses the energy output of an entire planet.
Type II: this civilization harnesses the energy output of a star, and generates about 10 billion times the energy output of a Type I civilization.
Type III: this civilization harnesses the energy output of a galaxy, or about 10 billion times the energy output of a Type II civilization.

A Type I civilization would be able to manipulate truly planetary energies.

They might, for example, control or modify their weather. They would have the power to manipulate planetary phenomena, such as hurricanes, which can release the energy of hundreds of hydrogen bombs. Perhaps volcanoes or even earthquakes may be altered by such a civilization. A Type II civilization may resemble the Federation of Planets seen on the TV program Star Trek (which is capable of igniting stars and has colonized a tiny fraction of the nearby stars in the galaxy). A Type II civilization might be able to manipulate the power of solar flares.

A Type III civilization may resemble the Borg, or perhaps the Empire found in the Star Wars saga. They have colonized the galaxy itself, extracting energy from hundreds of billions of stars. By contrast, we are a Type 0 civilization, which extracts its energy from dead plants (oil and coal). Growing at the average rate of about 3% per year, however, one may calculate that our own civilization may attain Type I status in about 100-200 years, Type II status in a few thousand years, and Type III status in about 100,000 to a million years.
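The arithmetic behind those estimates is ordinary compound growth. As a back-of-envelope sketch (the present-day figure of roughly 20 terawatts and the Type I/II/III thresholds below are rough assumptions that vary by an order of magnitude across sources), a few lines of Python show how long a steady 3% annual increase takes to reach each level:

```python
import math

# Back-of-envelope sketch of the Kardashev timetable quoted above.
# All inputs are rough assumptions: present-day consumption and the
# Type I/II/III thresholds vary by an order of magnitude across sources.
current_power_w = 2e13          # humanity today, roughly 20 terawatts
growth_rate = 0.03              # 3% average annual growth, as in the text
thresholds = {
    "Type I (planetary)":  1e16,   # roughly all usable power on one planet
    "Type II (stellar)":   1e26,   # ~10 billion times Type I
    "Type III (galactic)": 1e36,   # ~10 billion times Type II
}

for name, target in thresholds.items():
    years = math.log(target / current_power_w) / math.log(1 + growth_rate)
    print(f"{name}: about {years:,.0f} years of 3% growth")

# Prints roughly 210, 990, and 1,770 years. The Type III figure is a pure
# energy-growth number; the 100,000-year-plus estimate in the text reflects
# the additional limit of colonizing a galaxy at sub-light travel speeds.
```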

These time scales are insignificant when compared with the universe itself. On this scale, one may now rank the different propulsion systems available to different types of civilizations:

Type 0 • Chemical rockets • Ionic engines • Fission power • EM propulsion (rail guns)
Type I • Ram-jet fusion engines • Photonic drive
Type II • Antimatter drive • Von Neumann nano probes
Type III • Planck energy propulsion

Propulsion systems may be ranked by two quantities: their specific impulse and their final velocity of travel. Total impulse equals thrust multiplied by the time over which the thrust acts; specific impulse is that impulse delivered per unit weight of propellant, a measure of how efficiently an engine uses its fuel. At present, almost all our rockets are based on chemical reactions.

We see that chemical rockets have the smallest specific impulse, since they only operate for a few minutes. Their thrust may be measured in millions of pounds, but they operate for such a small duration that their specific impulse is quite small. NASA is experimenting today with ion engines, which have a much larger specific impulse, since they can operate for months, but have an extremely low thrust. For example, an ion engine which ejects cesium ions may have a thrust of only a few ounces, but in deep space it may reach great velocities over a period of time since it can operate continuously.

They make up in time what they lose in thrust. Eventually, long-haul missions between planets may be conducted by ion engines. For a Type I civilization, one can envision newer types of technologies emerging.
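Before moving up the Kardashev ladder, that chemical-versus-ion trade-off is easy to put in numbers. The sketch below compares a chemical booster with a small ion thruster using rough, invented figures for thrust, burn time, and propellant mass; it is meant only to illustrate why a feeble thrust sustained for years wins on specific impulse.

```python
# Sketch: total impulse (thrust x burn time) and specific impulse
# (total impulse per unit weight of propellant) for two engine types.
# All figures below are rough illustrative assumptions, not mission data.
G0 = 9.81  # m/s^2, standard gravity, used to express Isp in seconds

engines = {
    #                  thrust (N)   burn time (s)        propellant (kg)
    "chemical booster": (4.45e6,    150,                 2.3e5),
    "ion thruster":     (0.25,      2 * 365 * 86400,     540),
}

for name, (thrust, burn_time, propellant_kg) in engines.items():
    total_impulse = thrust * burn_time                  # N*s
    isp = total_impulse / (propellant_kg * G0)          # seconds
    print(f"{name:>16}: total impulse {total_impulse:.2e} N*s, "
          f"Isp ~ {isp:,.0f} s")

# The chemical booster delivers enormous thrust for minutes (Isp ~ 300 s);
# the ion thruster's whisper of thrust, sustained for years, extracts roughly
# ten times more impulse from each kilogram of propellant (Isp ~ 3,000 s).
```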

Ram-jet fusion engines have an even larger specific impulse, operating for years by consuming the free hydrogen found in deep space. However, it may take decades before fusion power is harnessed commercially on earth, and the proton-proton fusion process of a ram-jet fusion engine may take even more time to develop, perhaps a century or more. Laser or photonic engines, which would be propelled by laser beams pushing against a gigantic sail, may have even larger specific impulses. One can envision huge laser batteries placed on the moon which generate large laser beams which then push a laser sail in outer space. This technology, which depends on operating large bases on the moon, is probably many centuries away. For a Type II civilization, a new form of propulsion is possible: anti-matter drive.

Matter-anti-matter collisions provide a 100% efficient way in which to extract energy from matter. However, anti-matter is an exotic form of matter which is extremely expensive to produce. The atom smasher at CERN, outside Geneva, is barely able to make tiny samples of anti-hydrogen gas (anti-electrons circling around anti-protons). It may take many centuries to millennia to bring down the cost so that it can be used for space flight.

Given the astronomical number of possible planets in the galaxy, a Type II civilization may try a more realistic approach than conventional rockets and use nano technology to build tiny, self-replicating robot probes which can proliferate through the galaxy in much the same way that a microscopic virus can self-replicate and colonize a human body within a week. Such a civilization might send tiny robot von Neumann probes to distant moons, where they would create large factories to reproduce millions of copies of themselves. Such a von Neumann probe need only be the size of a bread box, using sophisticated nano technology to make atomic-sized circuitry and computers. Then these copies take off to land on other distant moons and start the process all over again. Such probes may then sit on distant moons, waiting for a primitive Type 0 civilization to mature into a Type I civilization, which would then be interesting to them. (There is the small but distinct possibility that one such probe was left on our own moon billions of years ago by a passing space-faring civilization. This, in fact, is the basis of the movie 2001, perhaps the most realistic portrayal of contact with extra-terrestrial intelligence.) The problem, as one can see, is that none of these engines can exceed the speed of light.

Hence, Type 0, I, and II civilizations probably can send probes or colonies only to within a few hundred light years of their home planet. Even with von Neumann probes, the best that a Type II civilization can achieve is to create a large sphere of billions of self-replicating probes expanding just below the speed of light. To break the light barrier, one must utilize General Relativity and the quantum theory. This requires energies which are available only to a very advanced Type II civilization or, more likely, a Type III civilization.

Special Relativity states that no usable information can travel locally faster than light. One may go faster than light, therefore, if one uses the possibility of globally warping space and time, i.e. General Relativity.

In other words, in such a rocket, a passenger who is watching the motion of passing stars would say he is going slower than light. But once the rocket arrives at its destination and clocks are compared, it appears as if the rocket went faster than light because it warped space and time globally, either by taking a shortcut, or by stretching and contracting space. There are at least two ways in which General Relativity may yield faster-than-light travel. The first is via wormholes, or multiply connected Riemann surfaces, which may give us a shortcut across space and time. One possible geometry for such a wormhole is to assemble stellar amounts of energy in a spinning ring (creating a Kerr black hole).

Centrifugal force prevents the spinning ring from collapsing. Anyone passing through the ring would not be ripped apart, but would wind up on an entirely different part of the universe.

This resembles the Looking Glass of Alice, with the rim of the Looking Glass being the black hole, and the mirror being the wormhole. Another method might be to tease apart a wormhole from the “quantum foam” which physicists believe makes up the fabric of space and time at the Planck length (10 to the minus 33 centimeters). The problems with wormholes are many: a) one version requires enormous amounts of positive energy, e.g. a black hole. Positive energy wormholes have event horizons and hence only give us a one-way trip. One would need two black holes (one for the original trip, and one for the return trip) to make interstellar travel practical. Most likely only a Type III civilization would be able to harness this power.

b) wormholes may be unstable, both classically and quantum mechanically. They may close up as soon as you try to enter them.

Or radiation effects may soar as you enter them, killing you. c) one version requires vast amounts of negative energy. Negative energy does exist (in the form of the Casimir effect), but huge quantities of negative energy will be beyond our technology, perhaps for millennia. The advantage of negative energy wormholes is that they do not have event horizons and hence are more easily traversable.

d) another version requires large amounts of negative matter. Unfortunately, negative matter has never been seen in nature (it would fall up, rather than down). Any negative matter on the earth would have fallen up billions of years ago, making the earth devoid of any negative matter. The second possibility is to use large amounts of energy to continuously stretch space and time (i.e. contracting the space in front of you, and expanding the space behind you). Since only empty space is contracting or expanding, one may exceed the speed of light in this fashion. (Empty space itself can expand faster than light.

For example, the Big Bang expanded much faster than the speed of light.) The problem with this approach, again, is that vast amounts of energy are required, making it feasible only for a Type III civilization. Energy scales for all these proposals are on the order of the Planck energy (10 to the 19th power billion electron volts, or about 10^28 eV, a quadrillion times larger than the energy of our most powerful atom smasher). Lastly, there is the fundamental physics problem of whether “topology change” is possible within General Relativity (which would also make possible time machines, or closed time-like curves). General Relativity allows for closed time-like curves and wormholes (often called Einstein-Rosen bridges), but it unfortunately breaks down at the large energies found at the center of black holes or the instant of Creation. For these extreme energy domains, quantum effects will dominate over classical gravitational effects, and one must go to a “unified field theory” of quantum gravity. At present, the most promising (and only) candidate for a “theory of everything”, including quantum gravity, is superstring theory or M-theory. It is the only theory in which quantum forces may be combined with gravity to yield finite results.

No other theory can make this claim. With only mild assumptions, one may show that the theory allows for quarks arranged much like the configuration found in the current Standard Model of sub-atomic physics. Because the theory is defined in 10- or 11-dimensional hyperspace, it introduces a new cosmological picture: that our universe is a bubble or membrane floating in a much larger multiverse or megaverse of bubble-universes.

Unfortunately, although black hole solutions have been found in string theory, the theory is not yet developed enough to answer basic questions about wormholes and their stability. Within the next few years, or perhaps within a decade, many physicists believe that string theory will mature to the point where it can answer these fundamental questions about space and time. The problem is well-defined. Unfortunately, even though the leading scientists on the planet are working on the theory, no one on earth is smart enough to solve the superstring equations.

Conclusion

Most scientists doubt interstellar travel because the light barrier is so difficult to break. However, to go faster than light, one must go beyond Special Relativity to General Relativity and the quantum theory. Therefore, one cannot rule out interstellar travel if an advanced civilization can attain enough energy to destabilize space and time.

Perhaps only a Type III civilization can harness the Planck energy, the energy at which space and time become unstable. Various proposals have been given to exceed the light barrier (including wormholes and stretched or warped space), but all of them require energies found only in Type III galactic civilizations. On a mathematical level, ultimately, we must wait for a fully quantum mechanical theory of gravity (such as superstring theory) to answer these fundamental questions, such as whether wormholes can be created and whether they are stable enough to allow for interstellar travel.

China develops its first homegrown server amid cybersecurity concerns
The new servers could potentially lessen China's reliance on foreign technology
by Michael Kan

A Chinese company has developed the country's first homegrown servers, built entirely out of domestic technologies including a processor from local chip maker Loongson Technology. China's Dawning Information Industry, also known as Sugon, has developed a series of the systems, the country's state-run Xinhua News Agency reported Thursday. 'Servers are crucial applications in a country's politics, economy, and information security.

We must fully master all these technologies,' Dawning's vice president Sha Chaoqun was quoted as saying. The servers, including their operating systems, have all been developed from Chinese technology. The Loongson processor inside them has eight cores made with a total of 1.1 billion transistors, built using a 28-nanometer production process. The Xinhua report quoted Li Guojie, a top computing researcher in the country, as saying the new servers would ensure that the security around China's military, financial and energy sectors would no longer be in foreign control.

Dawning was contacted on Friday, but an employee declined to offer more specifics about the servers. 'We don't want to promote this product in the U.S. media,' she said. 'It involves proprietary intellectual property rights, and Chinese government organizations.'

News of the servers is just one of the ongoing developments in China's push to build up its own homegrown technology. Work is being done on local mobile operating systems and in chip making, with much of it government-backed. Earlier this year, China outlined a plan to make the country into a major player in the semiconductor space. But it also comes at a time when cybersecurity has become a major concern for the Chinese government, following revelations about the U.S. government's own secret surveillance programs. 'Without cybersecurity there is no national security,' declared China's President Xi Jinping in March, as he announced plans to turn the country into an Internet power. Two months later, China moved to bar companies from selling IT products to the country if they failed to pass a new vetting system meant to comb out secret spying programs. Dawning, which was founded using local government-supported research, is perhaps best known for developing some of China's supercomputers.

But it also sells server products built with Intel chips. In this year's first quarter, it had an 8.7 percent share of China's server market, putting it in 7th place, according to research firm IDC. Selling servers, however, isn't an easy business, said Rajnish Arora, an analyst with IDC. It requires billions in investment, not simply in hardware, but also to develop the software applications that businesses want to run, he said. IBM, for example, has sold off its low-end server business to Lenovo, and is paying GlobalFoundries to take over its semiconductor manufacturing, as a way to improve earnings.

The move raises as many questions as it answers: 'Does Dawning have the business volume to sustain the investment or is this something the Chinese government is going to support?' 'Is this more about whipping up nationalist feelings for a server platform?'

NEW HYBRID SOLAR CELL BATTERY TAKES AIM AT SOLAR POWER’S ENERGY STORAGE PROBLEM

As the world seeks alternatives to fossil fuels, scientists, entrepreneurs and government leaders are pushing to develop cheap, clean energy. Wind-harnessing turbines are increasingly found in many parts of the world. Solar panels can be seen on more and more rooftops as budget and energy-conscious homeowners take advantage of government subsidies for renewable energy sources. However, renewable energy has yet to reach the level of increased efficiency and lower cost needed to compete with fossil fuels. With this in mind, researchers at Ohio State University recently announced their creation of a device that can act both as a solar cell, producing energy from sunlight, and as a battery storing that energy.

The new device, the brainchild of Dr. Yiying Wu, a professor of chemistry and biochemistry, may overcome some limitations in both solar cell and battery technology. The researchers published their work in the journal Nature Communications earlier this month. A shortcoming of solar panels is the loss of energy production on overcast days or at night. (This is the same issue for wind turbines on windless days.) Most homeowners with rooftop solar panels add excess daytime-produced energy to the local grid. Then in the evenings or on overcast days, they buy energy back from the local utility.

While this system isn’t ideal, the lack of an efficient battery system to store the excess energy necessitates it. For a home to truly be energy-independent, it would need to produce and store energy for later use. Wu’s patent-pending device, however, may bring us one step closer to the development of a system where efficient, decentralized power generation is the norm. One way the device improves on current systems is by nearly eliminating inefficiency in energy transfer. Usually, up to 20% of the energy produced by a solar cell is lost as it travels to and charges a battery. But since this new device combines the solar cell and battery into one device, nearly 100% of the energy produced can be stored.

Another advancement in the device is how the energy is stored, using a next generation lithium battery called lithium-air (Li-air) or lithium-oxygen battery. Most lithium batteries in use today are lithium-ion (Li-ion) batteries, which can be found in everything from consumer electronics to electric vehicles. They work by the movement of lithium ions from the negative electrode to the positive electrode during discharge.

As they’re charged, the lithium ions move back to the negative electrode. Compared to the older nickel-metal hydride (NiMH) and lead-acid batteries, the advent of Li-ion batteries brought many advantages. They have a much higher energy density, meaning they can store more energy in the same amount of space. Li-ion batteries also lose their charge at a much slower rate when not in use. But Li-ion batteries aren’t without shortcomings either. They can have a short lifespan, withstand only a limited number of charge/discharge cycles, and are temperature sensitive.

Use or storage of Li-ion batteries in high temperatures results in their rapid degradation. Because of these and other limitations, some researchers have focused on refining Li-air battery technology since as early as the 1970s. These batteries use lithium at the anode and oxygen (from air) at the cathode to create current, and they can have up to 15 times the energy density of Li-ion batteries—matching the energy density of gasoline. This means that their use in electric vehicles could potentially increase the driving range to over 500 miles on a single charge, directly rivaling the range of most gas-powered cars. IBM even has a program, the Battery 500 Project, to develop such range for an EV using a Li-air battery.
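A back-of-envelope sketch shows why the energy-density jump translates into range. The pack mass, Li-ion density, per-mile consumption, and the fraction of the theoretical Li-air gain that a practical pack might capture are all assumed numbers for illustration:

```python
# Rough sketch of why a 15x jump in energy density matters for EV range.
# Every number here is an assumption for illustration, not a measured spec.
pack_mass_kg        = 500          # assumed battery pack mass
liion_wh_per_kg     = 150          # assumed Li-ion pack-level energy density
liair_multiplier    = 15           # theoretical Li-air advantage cited above
consumption_wh_mile = 320          # assumed EV energy use per mile

def range_miles(wh_per_kg, usable_fraction=1.0):
    return pack_mass_kg * wh_per_kg * usable_fraction / consumption_wh_mile

print(f"Li-ion pack:                   ~{range_miles(liion_wh_per_kg):.0f} miles")
# Real Li-air packs would capture only part of the theoretical gain; even a
# modest 15% of it clears the 500-mile figure quoted in the article.
print(f"Li-air at 15% of the 15x gain: ~{range_miles(liion_wh_per_kg * liair_multiplier, 0.15):.0f} miles")
```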

Previously, Dr. Wu and his team developed a potassium-air (K-air) battery with support from Ohio State University and the Department of Energy. The K-air battery packed a much higher energy density than Li-ion batteries and was shown to be cheap to produce and almost 100% energy-efficient without producing toxic byproducts. The Li-air battery subsequently developed by Dr. Wu’s group was based on the design of the K-air battery, essentially substituting lithium for potassium. In the hybrid device, a permeable mesh electrode makes the solar cell open to the air, while the rods on its surface are treated to capture sunlight. The capture of sunlight produces electrons that decompose lithium peroxide into lithium ions, thereby charging the battery. During discharge of the battery, oxygen from the air is used to replenish the lithium peroxide.

So far, tests of this hybrid device have shown promise in terms of reliability and energy efficiency. The researchers will continue refining their device and experimenting with new materials to improve performance. They hope to license the technology to companies for further development and, eventually, to bring the device to market.

Even though the idea of a Li-air battery was proposed in the 1970s, little progress was made until the mid-1990s when new advanced materials made these batteries feasible. Just like any battery technology, Li-air batteries have their own set of challenges that must be overcome before large-scale use. But with over 400 research articles published in the past four years, the field continues to show promise. The use of fossil fuels has driven our technological evolution for over 100 years. However, we now realize that there is a price for this evolution.

Earth is our only home; the only planet we know to sustain life. And while this new device is just one small step, it moves in the right direction, to a future where abundant, reliable, and clean energy is the standard.

SERVICE ROBOTS WILL NOW ASSIST CUSTOMERS AT LOWE’S STORE

Most folks don’t interact with robots in their daily lives, so unless you work in a factory, the tech can seem remote. But if you’re a San Jose local? Welcome to the future.

Orchard Supply Hardware is introducing a pair of bots to greet and engage customers. Beginning this holiday season, the robots, dubbed OSHbot, will employ a suite of new technologies to field simple customer questions, identify items, search inventory, act as guides, and even summon Orchard Supply Hardware experts for a video chat. OSHbot was developed in a collaboration between Orchard Supply Hardware’s parent company, Lowe’s, and robotics startup Fellow Robots. Corporate groups, like Lowe’s Innovation Labs, join Singularity University Labs to extend their horizons, get a feel for technologies in the pipeline, and strike up mutually beneficial partnerships with startups immersed in those technologies.

“Lowe’s Innovation Labs is here to build new technologies to solve consumer problems with uncommon partners,” says Kyle Nel, executive director of Lowe’s Innovation Labs. “We focus on making science fiction a reality.” The five-foot-tall gleaming white OSHbot has two video monitors, two lasers for navigation and obstacle avoidance, a 3D scanner (akin to Kinect, we imagine), natural language processing, and a set of wheels to navigate the store.

Customers walk up to OSHbot and ask where they can find a particular item, or if they don’t know the item’s name, they can show it to the 3D scanner. OSHbot matches it up with inventory and autonomously leads the customer up the right aisle, using its onboard sensors to navigate the store and avoid obstacles. As the robot works, it creates a digital map of its environment and compares that map to the store’s official inventory map. Of course, memorizing long lists and locations is a skill particularly well suited to machines, and something humans struggle to do. But humans are still a key part of the experience. If a customer has a more complicated question, perhaps advice on a home improvement project or a product comparison, OSHbot is equipped to wirelessly connect to experts at other Orchard Supply Hardware stores for live video chat.
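Lowe’s and Fellow Robots haven’t published OSHbot’s software internals, but the guide behavior described above maps onto a familiar pattern: look the item up in an inventory map, then plan a collision-free route through the aisles. The toy sketch below, with an invented store grid, invented item locations, and a simple breadth-first path planner, is only meant to make that pattern concrete:

```python
from collections import deque

# Toy sketch of the "find it, then lead the way" behavior described above.
# The store grid, inventory locations, and planner are invented examples;
# OSHbot's real mapping and navigation stack is not public.
STORE = [                      # 0 = open floor, 1 = shelving
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
INVENTORY = {"wood screws": (2, 3), "paint rollers": (4, 1)}

def shortest_path(grid, start, goal):
    """Breadth-first search over open floor cells; returns a list of cells."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

item = "wood screws"
route = shortest_path(STORE, start=(0, 0), goal=INVENTORY[item])
print(f"Guiding customer to {item}: {route}")
```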

The robot speaks multiple languages—witness its fine Spanish in the video—and we think its video interface might prove a great helper for hearing-impaired customers. OSHbot is indeed cool—but it isn’t the first service robot we’ve seen. In 2012, we covered a Korean robot, FURO, that answered traveler questions in multiple languages and served as a roving billboard in a Brazilian airport. Even further back, in 2010, we wrote about PAL Robotics’ REEM-H1 robot mall guide.

OSHbot isn’t the first service robot to employ autonomous navigation and obstacle avoidance either. Indeed, the RP-Vita robot, made by iRobot and InTouchHealth, is already traversing hospital hallways, connecting distant doctors with patients by video.

But OSHbot is significant for a few other reasons. For one, it’s being adopted by Lowe’s, a big established firm in a sector of the economy—lumber, tools, and screws—you might not associate with robotics. Lowe’s hiring robots is akin to office supply chain Staples or UPS stores announcing 3D printing services for customers. Just as 3D printing is doggedly entering the mainstream, so too is robotics. Also, OSHbot ties together a number of technologies in a clever new package.

That laser guidance system? It’s not so different from the tech used in Google’s self-driving cars. And 3D scanning? We’ve seen it in gaming, but recently it’s been miniaturized in Leap Motion’s infrared gesture controls.

When we first saw Project Tango smartphones with 3D scanning hardware, we figured it wouldn’t be long before the tech appeared in robots. Indeed, at least one early adopter has already done just that. Now OSHbot is using similar tech to model and identify nails, screws, and tools in the hardware world. And there’s room for improvement.

Instead of a static creation, think of OSHbot as a kind of service platform on which its makers can hang other useful tech gadgetry. Marco Mascorro, CEO of Fellow Robots, suggests a future version might pair its 3D scanner with a 3D printer to make parts on the spot. We imagine other hardware might include a credit card scanner for checkout or NFC for mobile payments (think Apple Pay). It’d be just like those roving bands of Apple store employees with iPads—only, you know, with robots.

And why not add face detection software? These programs could allow the robot to gauge a customer’s attentiveness and even basic emotions.

If the customer looks confused, the software would recognize the expression and ask if they need more specific help finding an item. Or perhaps the robot got it wrong, and they need to be guided to a different product altogether. We think OSHbot has lots of potential—but it’s still a new creation. The goal in San Jose is to put its potential to the test in the real world. There is no better way to find bugs in a system than daily interaction with the public.

We expect there might be a few glitches (perhaps even comical ones). Voice recognition and natural language processing, for example, are vastly improved but still imperfect. Also, the robot’s price tag will matter for wider adoption.

Similar robots run into the tens of thousands of dollars, not including maintenance costs. But the trend in robotics has been rapidly falling prices—and a few (even pricey) robots might not only ease the burden on human employees, but attract a few new customers to boot. Will OSHbot and other customer service robots increasingly make their way into our everyday lives? Quite possibly. But fear not—they’re here to help.

Facebook experiment points to data ethics hurdles in digital research
by Jack Vaughan

A controversial Facebook research study that came to light this year provides fodder for discussions on the ethical issues involved in digital experimentation efforts. Digital experimentation is a flowering field. But a Facebook experiment that came into the public eye earlier this year has cast a harsh light on the practice. Many people see the Web as a big laboratory for research, it's true. But digital experimentation will not get a free ticket to ride. Ethics issues -- ones that may come to affect the work of data management and analytics professionals -- lurk in the fabric of the new techniques.

Facebook discovered as much after news emerged about a study it quietly conducted with Cornell University researchers in 2012. The social networking company altered the regular news feeds of some of its users for one week, showing one set of users happy, positive posts while another set saw dreary, negative missives. The results were contagious: Happiness begat happiness and sadness spawned gloom.

Measuring emotional contagion

Unlike in conventional research studies, though, the participants weren't explicitly made aware that they were being studied. Few were the wiser until the Cornell crew published a paper titled 'Experimental evidence of massive-scale emotional contagion through social networks' in the June 17 issue of the journal Proceedings of the National Academy of Sciences.
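The paper's actual data and statistics aren't reproduced here, but the underlying design is a straightforward A/B comparison: adjust what one group sees, leave a control group alone, and compare the emotional tone of what each group then posts. A minimal sketch with simulated numbers (the group sizes and word rates below are invented, not Facebook's figures) shows how such an effect would be scored:

```python
import random
import statistics

# Simulated sketch of the study's A/B design: does filtering negative content
# out of the feed raise the share of positive words users then post? The
# numbers below are invented for illustration; they are not the study's data.
random.seed(0)

def simulate_group(n_users, mean_positive_rate):
    """Fraction of positive words per user, with some individual variation."""
    return [max(0.0, random.gauss(mean_positive_rate, 0.01)) for _ in range(n_users)]

control   = simulate_group(5000, 0.052)   # unaltered feed
treatment = simulate_group(5000, 0.053)   # negative posts filtered out

diff = statistics.mean(treatment) - statistics.mean(control)
# Standard error of the difference between two independent sample means
se = (statistics.variance(control) / len(control)
      + statistics.variance(treatment) / len(treatment)) ** 0.5
print(f"difference in positive-word rate: {diff:.4f} (z ~ {diff / se:.1f})")
# A tiny per-user shift becomes statistically detectable at this scale,
# which is one reason the debate centered on consent rather than effect size.
```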

Contagion is catchy: The New York Post news desk could scarcely have come up with a punchier headline. But the study proved to be a matter of contention. Many people found it troubling that Facebook made guinea pigs of users. Their only warning was some arcana buried in a one-click user agreement. The Facebook study also stands out because it mixed two usually distinct types of research: a company trying to fine-tune a product and scientists trying to test hypotheses.

The blending has helped carry discussion of Facebook's experiment in manipulation beyond academic circles and news organizations. That was underscored last month at the Conference on Digital Experimentation (CODE) at MIT, where the experiment was among the topics being talked about.

Spotlight on digital experimentation

Speakers at the conference discussed advanced experimentation techniques -- efficient exponential experimentation, crowd self-organization, mobile advertising effectiveness, online experiment design, and the now-perennial favorite of data analytics cognoscenti: causation and correlation.

It was clear that digital experimentation currently is something largely done by big Internet companies trying to improve their online offerings. But Web-based medical research was also on the program. A CODE panel on experimentation and ethical practice included Leslie Meltzer Henry, an associate professor of law at the University of Maryland who, together with a colleague, has written an open letter to Maryland's attorney general urging legal action against Facebook over its experiment. While acknowledging the potential benefits of digital research, Henry thinks online research like the Facebook study should be held to some of the same standards required by government-sponsored clinical trials. 'I start from the position that large-scale digital experimentation is here to stay,' she said. 'It can be good. That said, I do think we have to be respectful of the subjects.'

What makes the Facebook experiment unethical, in her opinion, was not explicitly seeking subjects' approval at the time of the study. While they shared some criticisms, other panelists steered away from the idea of imposing clinical-style research requirements.

They looked to put Facebook's activity in the context of 'manipulative' advertising -- on the Web and elsewhere -- and news outlets that select stories and write headlines in a way that's designed to exploit emotional responses by readers. An underlying concern was placing strictures on large websites that experiment aggressively on their users. The Facebook study was 'unusual in the way it brought Cornell in,' said Esther Dyson, the veteran technology journalist and venture capitalist whose present efforts include HICCup, a Simsbury, Conn., company that seeks to use small U.S. towns as laboratories for health improvements. The line between scientific research and Web marketing should be clear, Dyson suggested.

But she thinks users have a responsibility, too. People must understand when and how they're being manipulated, she told the conference attendees. 'The best thing for all of this is a lot more education so people understand what is happening and how they are being manipulated.'

Data dilemmas to work through

There are emerging medical research alternatives that also raise issues of data ethics. Because big data systems are able to trawl through masses of data like never before, analysis of historical patient records -- a form of evidence-based research -- is being considered by some doctors looking to better diagnose and treat diseases. But a rheumatologist at Stanford University's Lucile Packard Children's Hospital who searched past records to adjust treatment for a patient with lupus was later warned by administrators against pursuing such methods. This method encounters ethics issues too.

The backdrop for all this activity is a general feeling that the methodology of the traditional clinical trial has run its course. Such trials are expensive. They sometimes rely on very sparse data.

A lot of statistical conniptions are used to backfill, but often the results are not reproducible. Evidence-based research and digital experimentation could play a useful role in moving science, medicine and product development forward. But whether the goal is product improvement or scientific advancement, there will be data ethics issues to sort through -- and data professionals will eventually be called on to join in the sorting efforts. Jack Vaughan is SearchDataManagement's news and site editor.

Summit Europe: Chip Implants Easy as Piercings

“I am bleeding just a little bit,” said Raymond McCauley. “Might I ask for a little assistance?” McCauley, chair of Singularity University’s biotechnology and bioinformatics track and a biohacker, had just implanted a microchip in his hand. He was giving his talk at Summit Europe on the $0.01 human genome, drag-and-drop genetic engineering, garage biohacking, cheese from genetically modified yeast—a whirlwind tour of future biotech. But the real thrust of this particular talk? McCauley is aware that the idea of incorporating technology into our bodies may seem repellent or unnatural to some people. But, he said, many of us are already cyborgs. Vaccination, for example, is a kind of technological augmentation.

“We are now different than we would be as just baseline humanity,” he said. McCauley is a hacker at heart and has no qualms about experimenting on himself to prove a point. So, in the middle of his talk, he called piercing professional Tom van Oudenaarde onstage and announced he’d be implanting a chip in his hand. Was he nervous? If so, it didn’t show; by all accounts the procedure was quick and relatively painless.

In fact, so much so that Singularity University cofounder Peter Diamandis walked onstage an hour later and got chipped too. The chip—encased in a biocompatible glass cylinder the size of a grain of rice—was implanted in a three-minute procedure, start to finish, and left a small puncture wound and a bit of soreness. To be clear, neither the technology nor the procedure is particularly novel. Vets have been implanting pets and livestock with RFID chips for a couple decades, and human RFID implants have been happening since at least the mid-2000s.

So, why get one? According to Diamandis, it was a spur-of-the-moment experiment to see how he’d feel having a piece of technology in his body.

But he thinks that implantables, in general, could offer much more. “In all honesty, I think biohacking, the cyborg human, is an eventuality that will materialize when the value proposition gets high enough,” Diamandis wrote about his new implant.

RFID chips are passive bits of hardware powered and activated when near an RFID reader. Most people have experienced them at one time or another—cards granting access to an office or onto the subway or a bus.

Diamandis suggests near-term uses of RFID implants might be smooth interaction with the Internet of Things. We could use our hands to unlock doors, start the car, and pay for coffee. McCauley says we might keep contact information on our chip, swap said information by shaking hands—like an embedded business card. Some of these applications are still in the future. The number of connected devices in our everyday lives is still small enough that most of us wouldn’t get much use out of an embedded chip. And whether embedding it would be an improvement on keeping it somewhere outside our bodies, like on a card or in our phone, is an open question.

That said, the number of devices we might control with an implant is set to grow in the coming years. And the truly compelling “value proposition” may lie elsewhere—in health and medicine. Currently, health monitoring is a prime argument for incorporating technology into our bodies. The idea, known as the quantified self, is that the more we know (thanks to sensors), the more we can do to prevent disease before it’s too late.

Wearable sensors, for example, have been popular in tech recently. Most of these are the kind that go around our wrists and track activity, heart rate, and the like. But there are others. McCauley noted wireless earphones that measure vitals. However, it’s increasingly recognized that there are problems with wearables.

They’re easily forgotten at home or simply laid aside. They lack accuracy. They can measure some vital signs, but certainly not all. One potential solution we’ve explored is sensor-equipped clothing. Indeed, an undershirt or pair of underwear with sensors woven in is less likely to be left behind in the morning.

You can see where all this is going—the more intimately connected, the more useful. Perhaps the next steps are implantables and insideables. The fact is, if it’s a matter of better health, staying outside the hospital, maybe even living longer—people are likely to be more amenable to the idea of putting tech in their bodies.

Pacemakers are a contemporary example, and implantable insulin pumps are just around the corner. Health benefits of these implantables make them acceptable. But beyond dire needs, like everyday body monitoring of healthy people, widespread use will likely require smaller, less invasive devices. And there’s reason to believe miniaturization won’t end with a chip the size of a grain of rice. Google recently made headlines for its nanoparticle diagnostics project—still largely theoretical—and earlier this year, for its work on glucose-sensing contact lenses for diabetics.

Others are working on body-monitoring skin patches and tattoos. And we’re learning to power miniature electronics at a distance. The day after getting his implant, McCauley returned to the stage to report.

He was just fine, he assured the audience. More than fine, in fact. He’d made a video showing off his newly acquired powers.

Even today’s implantable tech is relatively non-invasive and somewhat useful—two properties that will only improve in the future. “This is how much we care about science,” McCauley quipped. “We bleed for you.” Perhaps RFID implants like these won’t ever catch on in a big way. But it raises a compelling question: If implantable technology were no more invasive than a vaccination—where would you draw the line? Or would there even be a line?

Summit Europe: To Anticipate the Future Is to Abandon Intuition

In the evolution of information technology, acceleration is the rule—and this fact isn’t easy for the human brain to grasp. You’d be hard pressed to find someone who isn’t at least intuitively aware of the speed of information technology.

We’ve become used to the idea that the performance of our devices has regularly doubled for the last few decades. What is less intuitive is the rate at which this doubling results in massive leaps. The price performance of today’s devices is orders of magnitude better than that of computers in 1980. But even this is not completely outside the realm of immediate experience.

We know even our smartphones are much more capable than the first computers we owned. It’s here, however, that intuition fails us completely. Over the course of Summit Europe this week, two reasons emerged from the slew of talks.

Exponential doublings start slow before making extremely rapid progress in just a few steps. First, the exponential growth in computing isn’t just something that’s happened—it also appears likely to continue in the foreseeable future. Ray Kurzweil notes that, although the current cycle has been driven by integrated circuits, it won’t end when we’ve exhausted their potential. Exponential progress has been surprisingly consistent from the first computers—with one technology picking up where the last one left off. (Kurzweil lists earlier computing technologies as electromechanical, relay, vacuum tube, and transistors.) As exponential growth continues, we can expect another billion-fold improvement in the coming decades. Second, computing’s exponential pace isn’t confined to the device in your pocket, lap, or desk. The power of digital information is infiltrating other fields and driving them at a similarly torrid pace.

The key to anticipating—if not precisely predicting—the future of technology is understanding these exponential curves. At first they double as small numbers (0.01 to 0.02 to 0.04, etc.) and appear slow and linear.

This is deceptive. When the doubling hits one, two, and so on—it takes a mere 30 steps to reach a billion. And critically, half of all exponential growth happens in the last step. Anyone basing their predictions on an exponential trend will, by definition, look like a hack and a genius in short succession. Whatever process they’re predicting will be only halfway to the predicted level of progress—and therefore still appear distant—at the moment just before it comes to fruition and finally proves them correct. Take a breath to appreciate what that means. To stay ahead of an exponential curve, you have to make plans that few of your peers will fathom until the very last moment.
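The arithmetic is easy to check for yourself. A few lines of Python confirm both claims: thirty doublings from a small start is roughly a billion-fold increase, and the final doubling contributes more than everything that came before it combined:

```python
# Doubling from a small start: slow-looking at first, then explosive.
value, history = 0.01, []
for step in range(31):
    history.append(value)
    value *= 2

print(f"after 30 doublings: {history[-1] / history[0]:,.0f}x the starting value")
# ~1.07 billion-fold after 30 doublings (2**30).

# "Half of all exponential growth happens in the last step":
# the final value exceeds the sum of every earlier value.
print(history[-1] > sum(history[:-1]))   # True
```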

It’s easy to see how much pressure and criticism that inevitably invites. It is small wonder that few people—even if they actually appreciate exponential trends—are able to not only employ this philosophy but stick by their convictions. Our brains and social structures are simply not built to appreciate acceleration. This is why, today, we are often skeptical of and surprised by technology. And Summit Europe was nothing if not a tour of the technologies that we’ll be most skeptical of and surprised by in the coming years. What are these? Artificial intelligence, computing and networks, robotics, 3D printing, genomics, and health and medicine.

It would be naive to say many of these fields have not been called revolutionary before now. It would be equally misguided to underestimate their power to do great things in the future. Because these fields, in one capacity or another, are hitched to exponentially growing computing power—they may look disappointingly linear (maybe for a long time) before becoming suddenly, precipitously surprising. Are we poised to wrest biology from nature? To develop machines with intelligence that rivals or outstrips our own?

To manipulate the material world on molecular scales? If such predictions sound outlandish—you have a human brain. But don’t let that blind you to the more general rule: As the world is increasingly digitized, many technologies you think belong to the distant future will arrive much sooner than expected.

Devops is one of those rare enterprise IT trends that began with a sentiment rather than a technology: Why can't developers and operations just get along? Developers demand constant reconfiguration from operations, while operations needs apps that are truly production-ready. A broad array of devops solutions has emerged to help both sides collaborate and do their jobs more effectively. In this article, InfoWorld and Network World have teamed up to examine devops from both an organizational and a technology perspective.

Goals

The specific goals of a DevOps approach include improved deployment frequency, which can lead to faster time to market, lower failure rate of new releases, shortened lead time between fixes, and faster mean time to recovery in the event of a new release crashing or otherwise disabling the current system. With a DevOps approach, simple processes become increasingly programmable and dynamic; the approach aims to maximize the predictability, efficiency, security, and maintainability of operational processes.

Very often, automation supports this objective. DevOps integration targets product delivery, quality testing, feature development, and maintenance releases in order to improve reliability and security and provide faster development and deployment cycles. Many of the ideas (and people) involved in DevOps came from the enterprise systems management and agile software development movements. DevOps aids in software application release management for an organization by standardizing development environments. Events can be tracked more easily, and documented process control and granular reporting issues can be resolved.

Companies with release/deployment automation problems usually have existing automation but want to more flexibly manage and drive this automation — without needing to enter everything manually at the command-line. Ideally, this automation can be invoked by non-operations employees in specific non-production environments. The DevOps approach grants developers more control of the environment, giving infrastructure more application-centric understanding.
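As a concrete (and entirely hypothetical) illustration of that kind of self-service automation, the sketch below wraps a scripted deployment so that anyone can trigger it against non-production environments while production stays with the operations team. The environment names, policy, and helper scripts are invented examples, not any particular product's interface.

```python
import subprocess
import sys

# Minimal sketch of self-service release automation: developers may trigger a
# scripted deployment themselves, but only to non-production environments.
# Environment names, commands, and the policy below are invented examples.
ALLOWED_ENVIRONMENTS = {"dev", "qa", "staging"}   # production stays ops-only

def deploy(environment: str, version: str) -> None:
    if environment not in ALLOWED_ENVIRONMENTS:
        raise SystemExit(f"refusing to deploy to '{environment}': "
                         "production releases go through the operations team")
    steps = [
        ["git", "fetch", "--tags"],              # make sure the release tag is available
        ["git", "checkout", version],            # check out the tagged release
        ["./run_tests.sh"],                      # hypothetical test script
        ["./deploy.sh", environment, version],   # hypothetical deploy script
    ]
    for step in steps:
        print("running:", " ".join(step))
        subprocess.run(step, check=True)         # stop on the first failure
    print(f"deployed {version} to {environment}")

if __name__ == "__main__":
    deploy(sys.argv[1], sys.argv[2])    # e.g. python deploy.py qa v1.4.2
```

In practice this role is usually played by a CI/CD or release-automation tool rather than a hand-rolled script, but the policy idea is the same: codify who may deploy what, and where, and let the pipeline enforce it.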

Role in continuous deployment

Companies with very frequent releases may require a DevOps awareness or orientation program. Flickr, for example, developed a DevOps approach to support a business requirement of ten deployments per day; this daily deployment cycle would be much higher at organizations producing multi-focus or multi-function applications. This is referred to as continuous deployment or continuous delivery and is frequently associated with the lean startup methodology. Working groups, professional associations and blogs have formed on the topic since 2009.

History of the term 'DevOps'

The term 'DevOps' was popularized through a series of 'DevOps Days' starting in 2009 in Belgium.

Since then, there have been DevOps Days conferences held in India, the US, Brazil, Australia, Germany, and Sweden. The term 'DevOps' started appearing online in the Spring of 2010.

Development methodologies (such as agile software development) adopted in a traditional organization with separate departments for Development, IT Operations and QA previously did not have deep cross-departmental integration with IT support or QA.

DevOps promotes a set of processes and methods for thinking about communication and collaboration between departments.

Factors driving adoption

The adoption of DevOps is being driven by factors such as:
• Use of agile and other development processes and methodologies
• Demand for an increased rate of production releases from application and business unit stakeholders
• Wide availability of virtualized and cloud infrastructure from internal and external providers
• Increased usage of data center automation and configuration management tools

3 Ways to Use Social Media to Recruit Better Tech Talent
Is your company using social media to its full potential when it comes to finding new employees? Your competition probably is.

A whopping 93 percent of the 1,855 recruiting pros surveyed by Jobvite use or plan to use social media in their recruiting efforts. The reason why is simple and powerful.

According to respondents, leveraging social media improves candidate quality by 44 percent over using only 'traditional' recruiting techniques like phone screenings and filtering resumes based solely on skills and experience. Social media offers not only information about a candidate's experience and skills, but a better glimpse into their lifestyle, values and cultural fit, which is crucial for companies looking not just to recruit and hire, but also to engage employees and improve retention rates. The Jobvite survey reveals that 80 percent of recruiters are using social media to evaluate a candidate's potential culture match. The emphasis on cultural fit is a major reason recruiters are doubling down on social media as a tool.

Use Social Media to Evaluate Cultural Fit

Social media's often used to highlight 'what not to do' from a candidate's perspective (take down those photos of your bachelor weekend in Vegas, please), but what's often overlooked is its usefulness to recruiters and hiring managers as both a sourcing and a screening tool for new talent, especially when it comes to finding talent with that perfect cultural fit, says Yarden Tadmor, CEO and founder of anonymous job search and recruiting app Switch. 'Traditionally, social media's importance to recruiting has been limited to the way it is used to weed out candidates who might be a bad fit -- in other words, those unprotected tweets can do serious damage when recruiters are evaluating potential employees.

But social media, whether staple networks like LinkedIn, Facebook and Twitter or burgeoning apps like our Switch, has become a convenient and comprehensive way for recruiters to find, 'like' and connect with candidates,' says Tadmor. Filtering candidates through the lens of their Facebook profiles, Twitter feeds and other platforms helps determine whether prospects would fit the culture of a company and, perhaps more importantly, if they would be willing to consider a move, Tadmor says. 'The impact social media has had on our recruiting is immeasurable.

When we're on the fence about a candidate's resume, we use LinkedIn to find out how involved they are in the LinkedIn community and throughout the industry. This gives valuable insight that was previously unattainable, and those signals are key ingredients of our prime candidates,' says Cristin Sturchio, global head of Talent at Cognolink. Sturchio adds that when using LinkedIn as a screening tool, she and her team look for candidates who've gained endorsements, who belong to professional groups and follow relevant companies and people. 'This tells us that they are engaged and active in their profession, and are likely to be engaged and active as one of our employees. You can't find that kind of information on a resume, and if you can, it often gets lost in more pressing details,' says Sturchio.

Use Social Media to Evangelize Your Business, Mission and Values From a recruiting perspective, having a well-defined social media brand can help attract the best passive candidates, says Tadmor. In fact, according to the Jobvite research, companies know they have to sell their workplace cultures not just to attract the right candidates but to influence their decisions about where to work, and attract like-minded talent. In addition, continued use of social media will help companies attract the next-generation workforce, as millennials continue to use social and mobile technology in their career efforts, according to managing partner David Hirsch. Hirsch says that social media is the ideal medium for employers to broadcast their social mission in order to attract high-quality candidates. 'With mobile being the dominant way that millennials communicate and operate, we fully expect the way that companies will find new talent will continue trending toward more use of social media, as connections are made based on geo-location proximity, interests, passions, experiences, extended network, etc.,' says Hirsch. As more millennials enter the workforce, Hirsch says apps like Switch will become more important for both employers and employees, allowing them to quickly sift through the 'noise' and find their perfect 'match' in a way that's more in line with how millennials will expect to experience their job searches and how recruiters should target prospects.

Use Social Media to Advertise Open Positions Of course, recruiters and hiring managers are still using social media in a more traditional way, to post open positions or as a platform to reach broader segments of their industry in hopes of luring potential employees, says Seven Step RPO (Recruitment Process Outsourcing) president Paul Harty. 'A company might be using Facebook or Twitter to broadcast targeted, industry-related news for mechanical engineers. It also may have recruiters posting about mechanical engineers for those interested in related jobs, and then advertising those jobs through that commentary. That targeted outreach and profiling happens more than you think. Companies are finding people by tweeting or posting to a Facebook page to find the skills they are looking to acquire, regardless of position. However and wherever recruiters can find talent, they will leverage those channels to be where the talent community exists,' says Harty. Cognolink's Sturchio highlights that her organization also uses social media for job postings.

'We also use Twitter to blast out our recruiting activities on campus, which allows us to find new candidates, promote our brand, and draw interest and awareness to find talent,' says Sturchio. 4 Ways Your Competitors Are Stealing Your IT Talent Savvy companies are shopping for talent in what is arguably the best place to find it -- their competition. As the talent war heats up, poaching tech professionals is becoming increasingly common. Here's how it's done and how to stop it. By One of the best places for your competitors to find great talent is within the walls of your company.

If your best and brightest have been jumping ship to work for your biggest rival, it's important to know how they're being recruited, why they are being targeted and what you can do to stop it. Here's how your competitors may be poaching your talent. They're Using Professional Search Tactics Savvy companies know that the best talent is often already employed - with their competitors. Hiring a professional search firm -- or if that's not financially feasible, copying their subtle approach -- can lure away even the most content employees. As this article points out, targeting successful talent and then making contact via social networks like Facebook or LinkedIn, or at professional networking events, conferences or industry events with the promise of a 'great opportunity' can pique their interest and entice them to consider a move.

They're Using Tools Like Poachable or Switch One of the biggest challenges for hiring managers and recruiters is finding passive candidates, says Tom Leung, founder and CEO of anonymous career matchmaking service Poachable. 'Passive job finding - and searching for passive candidates - has a lot of interest for both candidates and for hiring managers and recruiters. As the economy rebounds and the technology market booms, it remains difficult to match potential candidates with key open positions,' Leung says. Employees and candidates are demanding higher pay from potential employers while, at the same time, STEM jobs are taking twice as long to fill as non-STEM jobs.

'When we asked hiring managers and recruiters what their biggest challenge was, they told us their weak spot was luring great talent that was already employed. Everybody seems to be doing a decent job of blasting out job postings, targeting active candidates, interviewing them, but this passive recruiting is where people get stuck,' says Leung. Passive candidates are already employed and aren't necessarily unhappy, Leung says, but if the right opportunity came up, they would consider making a move.

That's where tools like Leung's Poachable and the new Switch solution come in. 'These folks might want to make a move, but they're too busy to check the job boards every day, and they're content where they are. What we do is help them discover what types of better, more fulfilling jobs are out there by asking them what 'dream job' would be tempting enough for them to move, and we help them find that,' says Leung. Are You Offering Competitive Benefits and Perks? Flexible work schedules, job-sharing, opportunities to work remotely, subsidized child and elder care, employer-paid healthcare packages, on-site gym facilities, a masseuse and unlimited vacation time are all important if you want to attract talented IT professionals. 'Companies that acknowledge and accommodate the fact that their talent has a life separate from work tend to have more engaged, loyal and productive employees,' says Dice.com president Shravan Goli. A Dice survey of tech pros found that benefits and perks like flexibility, free food and the ability to work with cutting-edge technology were key drivers of their decision to take a new position.

'With an approximately 2.9 percent unemployment rate in the IT industry, companies must get creative to attract and keep their top talent. Perks and benefits are one way they are looking beyond compensation,' says Goli. Offering Better Monetary Incentives Your talent is one of your business' greatest assets, and if you're not doing everything you can to ensure they stay happy, especially where compensation is concerned, you could lose them - and be at a competitive disadvantage, according to the U.S. Small Business Administration (SBA). 'All companies have valued employees - those they can't afford to lose because of their skill, experience and commitment to their work. One way you can help them resist the temptation to stray is to show that you are invested in their future,' according to the SBA. The SBA advises giving these employees one-on-one time with management, discussing their professional goals and their importance, and sharing the company's vision for continued growth as well as the employee's role in that growth.

In addition, the SBA says, offering meaningful pay increases, a generous bonus structure and/or compensation like 'long-term incentive plans tied to the overall success of the business, not just individual performance, can also send a clear message to your employees that they have a recognized and valuable role to play in your business as a whole.' Short Film “Memories 2.0” Envisions Reliving the Past Through Virtual Reality One of the hard truths of human existence is that though we are able to move freely through space, we are mercilessly constrained by time.

Each moment of life arrives then rapidly passes, seemingly lost forever. In an attempt to capture information from these moments as they flow past, our brains record memories, but they are limited by what is perceived and stored on a device that is organic and fragile. Drawing on concepts of technology, memory, and lost relationships explored in other films such as Eternal Sunshine of the Spotless Mind, the short film Memories 2.0 explores the use of virtual reality to recapture moments of love lost forever. Whether or not virtual reality and neuroscience will converge in the future to produce technology that enables the reliving of memories, science fiction films love to delve into technology’s effect on the mind. Consider movies such as Brainstorm, Until the End of the World, and The Matrix that all explore the mental strain anticipated when bridging the physical world and virtual reality. Each depicts how technology will empower us, but at a price.

Coexisting in a world full of constraints and one that seems limitless will have an impact on our identity and relationships. Yet even with today’s technology, we are increasingly existing in two realities concurrently or, put another way, the new reality is hybridized. Digital images and video allow us to capture and relive memories in detail, and social media and cloud technology now permit vicarious reliving of moments from other people’s lives with ease.

Virtual reality will only make these experiences much more immersive. It raises the question: how are our minds already being affected by this divide?

Memories 2.0 doesn’t offer any answers but simply a glimpse at the life of a protagonist attempting to regain a part of himself through technology. In the future, all of us may end up in his shoes. How Bad Software Leads to Bad Science Software that can crunch data faster than any researcher is as much a part of science these days as petri dishes. Researchers are even designing their own bespoke programs, but not every scientist is a programmer, and bad software produces bad science.

A new survey of 417 randomly selected UK researchers, conducted by the Software Sustainability Institute (SSI), reports that 70 percent of respondents believe they could not practically continue their research without the aid of software. Some 56 percent of respondents design their own software, and 20 percent of those scientists do so without any training in software engineering whatsoever. “It’s a terrible concern, because you can work your way through software development—researchers are intelligent people, they can work this stuff out—but you can’t build software that is reliable,” Simon Hettrick, deputy director of the SSI, told me.

“If you’re producing your results through software, and your software doesn’t produce reproducible results, then your research results aren’t reproducible.” Bespoke software is used at nearly all levels of science, Hettrick told me. Something as simple as generating a graph might need a specialized program, all the way up to the custom tools scientists use to dig through massive data sets and make connections. Problems arise when that software is designed by researchers who really don’t know what they’re doing when it comes to coding. A single mistake in the code can lead to a result that appears innocuous enough, but is actually incorrect. In 2006, a set of high-profile papers had to be retracted due to an error produced by a homemade piece of software. The research group, led by Geoffrey Chang, thought they had identified a new protein structure, but their discovery was only made possible through a mix-up in their bespoke data analysis program that effectively inverted their results.
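To make that failure mode concrete, here is a minimal, hypothetical Python sketch (not the group's actual analysis code) showing how something as small as reading two data columns in the wrong order can silently flip the sign of a derived quantity while the program still runs without any visible error:

```python
# Hypothetical illustration: a column-order mix-up silently inverts a result.
# The "analysis" computes the signed area of a triangle from (x, y) points;
# reading the columns in the wrong order flips the sign, and nothing crashes.

def signed_area(points):
    """Signed area of a polygon given as [(x, y), ...] (shoelace formula)."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

raw_rows = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]       # columns are (x, y)

correct = signed_area(raw_rows)                        # +6.0
buggy = signed_area([(y, x) for x, y in raw_rows])     # columns swapped: -6.0

print(correct, buggy)  # same magnitude, opposite sign -- easy to miss
```

In a long analysis pipeline, a silent sign flip like this can propagate all the way into a published conclusion before anyone notices.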

Poorly designed software being passed between research groups and used uncritically is also a concern. A 2013 study looked at the use of modeling software among researchers who study species distributions. Many of these programs were recommended by colleagues or picked up by word of mouth. The use of powerful software across nearly all areas of research is poised to only increase in the near future. The ability to crunch more numbers, more accurately, and with greater speed, is a boon to researchers who can spend more of their time thinking creatively about the data their work produces.

“Software lets you do so much more in the amount of time that would allow the average human to conduct an entire research career; it’s about empowering researchers to do more,” Hettrick told me. There’s no doubt that awareness about the use of poorly designed software has risen in the scientific community as a result of these studies and professional horror stories. Yet, the recent SSI survey indicates that some researchers—potentially a non-trivial number—are still relatively ignorant regarding their own digital tools. “Training for researchers is very important,” Hettrick said. “We think that software training, a basic level of software engineering and development, should be in all doctoral schools so that they’re producing a research community with a basic understanding of how software works.” In the Digital Enterprise Everyone Needs to Think SEO Since the dawn of the Web, when Google first learned to crawl, search engine optimization has been a key to Internet success, but today it's more important than ever.

The Home Depot’s SEO manager talks about how SEO now ‘stretches through everything.’ The online world has spawned a virtual content creation and aggregation boom. Digital marketers flood online channels with YouTube how-to videos, Instagram photos, Tweets, Facebook posts, Web pages, graphics, blogs and more. In turn, consumers rely on Google search to help them sift through the rubble and find nuggets of useful information. Getting messages to rise to the top of the search page is the job of search engine optimization (SEO).

[ Related: ] While SEO practices have been around forever -- at least since the dawn of Google -- they've become an increasingly important skill for digital marketers. These days just about every company is a digital publisher, creator of promotional content and a content marketer, and so every company needs to become an SEO expert as well. 'Content, in any form, is really the orange apron of Home Depot,' says Sean Kainec, senior manager of SEO at The Home Depot. 'Content explosion might not be the right words, but there's definitely an intent on focus.' Why Owning the Letter ‘H’ Is Key Kainec's SEO team has helped The Home Depot capture the letter 'H' in Google search. That is, when consumers type 'H' in the Google search bar, The Home Depot pops up as a suggestion. While this might not seem like much, its impact can be tremendous, leading to greater brand recognition, higher number of website visits and, ultimately, more sales.

But capturing the letter 'H' hasn't been easy. Kainec has spent nearly a decade in the field of SEO, where he's seen search engines change and their importance grow rapidly.

In the old days, people would type in keywords. Today, they verbally ask questions. The search engine has evolved into a kind of semantics engine, and SEO has had to evolve with it. [ Related: ] 'With SEO, you really have to think like a customer, really dig into what the customer is asking Home Depot to be,' Kainec says. 'If your site isn't deemed an authority, you won't show up. So it's about crafting Home Depot as the authority for home improvement.'

Kainec also has to stay on top of the search game, which isn't easy given Google’s constant updates to its search engine algorithm. In turn, site owners may need to tweak their content strategy, make wholesale changes to the way their site is built, or put an end to shady practices such as link farms.

These algorithm updates can be quite secretive and complex despite the fact that they're given simple animal names. 'We're tracking Google changes -- your penguins, pandas and hummingbirds,' Kainec says. 'They make us feel like zookeepers.' [Related: ] SEO can be hard for business executives to understand, Kainec says. And so The Home Depot uses Brightedge, an SEO reporting tool, to keep them in the loop. The Brightedge dashboard shows a company's keyword ranking, competitors' keyword ranking, competing pages, correlations, impact over time, and other analytical information. Are You in an SEO State of Mind?

Getting a handle on SEO will become more and more important with content expanding beyond a company's homepage and into social networks. As the digital world merges with the physical one, such as consumers tweeting while watching a television commercial, greater emphasis will be put on search as consumers look for the hot discussion or topic of the moment. 'My personal opinion is that SEO stretches through everything, whether that's social, UI [user interface], UX [user experience], content marketing, general marketing,' Kainec says.

'SEO is not a job title; it's a frame of mind.' Three ways virtualization increases IP address consumption Virtual environments use at least twice as many IP addresses as physical ones because each virtual desktop and each endpoint used to access it needs its own address.

Luckily, IPAM tools can help you keep track of your addresses. IP address consumption doubles when you deploy virtual desktops, so it's important that IP address management is on your radar. When an organization begins working toward implementing VDI, it has a lot of things to consider: Is the storage connectivity fast enough? Do the host servers have enough memory?

Will the end-user experience be acceptable? These are all important questions, but one aspect of the preparation process that is sometimes overlooked is the effect that desktop virtualization will have on IP address consumption. How virtualization consumes IP addresses There are three primary ways that desktop virtualization affects IP address consumption. The first has to do with changes that you may need to make to your network configuration. Depending on how many virtual desktops you want to support, you may need to expand your IP address pools, and you might even need to deploy some extra network infrastructure, such as additional DHCP servers. This certainly isn't necessary in every situation, but it happens often enough to make it worth mentioning.

The second way consumption becomes a factor is that the organization may suddenly consume far more IP addresses than it did prior to the desktop virtualization implementation. The reason for this is quite simple. Consider an environment without virtual desktops.

Each PC consumes an IP address, as do any backend servers. Shops implementing virtual desktops or VDI sometimes overlook the fact that desktop virtualization does not eliminate desktop hardware needs. Regardless of whether users connect via tablets, thin client devices or repurposed PCs, the endpoint consumes an IP address, and so does each virtual desktop.

This means that desktop virtualization effectively doubles IP address consumption on the client side. Each user consumes at least two IP addresses: The physical hardware uses one address and the virtual desktop uses another. There is no way to get around this requirement, so you must ensure that an adequate number of IP addresses are available to support virtual desktops and endpoints. The third reason IP address consumption increases in a virtual desktop environment has to do with the way workers use virtual desktops. Employees can use virtual desktops on a wide variety of devices, such as PCs, smartphones and tablets.

This gives workers the freedom to use the device that makes the most sense in a given situation. But IP address consumption does not mirror device use in real time. When a device connects to the network, it receives a DHCP lease, but the lease isn't revoked when the device disconnects from the network. The lease remains in effect for a predetermined length of time, regardless of whether the device is still being used. As such, the IP address is only available to the device that leased it; it's not available for other devices to use during the lease period. Desktop virtualization by its very nature leads to increased IP address consumption.
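As a rough back-of-the-envelope illustration of the effect described above, the short Python sketch below estimates concurrent IP address demand for a virtual desktop rollout. The inputs (user count, devices per user, lease-overlap factor, server count) are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope estimate of concurrently leased IP addresses in a VDI
# deployment. All inputs are illustrative assumptions.

def estimate_ip_demand(users, devices_per_user=1.0, sessions_per_user=1,
                       lease_overlap=1.2, backend_servers=0):
    """Estimate how many IP addresses are leased at once.

    devices_per_user  -- average endpoints (PC, thin client, tablet) per worker
    sessions_per_user -- concurrent virtual desktops allowed per worker
    lease_overlap     -- factor > 1.0 for DHCP leases still held by devices
                         that are no longer in active use
    """
    endpoint_addresses = users * devices_per_user * lease_overlap
    desktop_addresses = users * sessions_per_user
    return int(round(endpoint_addresses + desktop_addresses + backend_servers))

# 500 workers, ~1.5 devices each, one virtual desktop per worker, 40 servers:
print(estimate_ip_demand(500, devices_per_user=1.5, backend_servers=40))
# roughly 1,440 addresses, versus about 540 for the same shop before VDI
```

Tightening session limits or shortening DHCP lease times lowers the estimate, which is exactly the point of the recommendations that follow.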

The actual degree to which the IP addresses are consumed varies depending on device usage, however. From a desktop standpoint, you can expect the IP address consumption to double, but in organizations where workers use multiple devices, consumption can be even higher. How to protect the network against increased IP consumption The first thing I recommend doing is implementing session limits. Remember, every virtual desktop that is powered up consumes an IP address.

You can establish some degree of control over the IP address consumption by limiting the number of concurrent sessions that users are allowed to establish. If each user is only allowed to have one or two concurrent sessions, then you will consume fewer IP addresses (not to mention fewer host resources) than you would if each user could launch an unlimited number of virtual desktops. I also recommend adopting an automated IP address management tool. There are a number of third-party options on the market.

Windows Server 2012 and 2012 R2 also include IP address management software in the form of the Microsoft IPAM feature. Like any other form of resource consumption, IP address usage tends to evolve over time. To that end, it is extremely important to track IP address usage over the long term so you can project if or when your IP address pools are in danger of depletion. An IP address management tool should also include an alerting mechanism that responds to situations where a DHCP pool is nearing depletion; the depletion of a DHCP scope can result in a service outage for some users. Using an automated software application to track scope usage is the best way to make sure that you are never caught off guard.
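A minimal sketch of the kind of scope-utilization alert such a tool provides, assuming the per-scope totals and current lease counts have already been pulled from the DHCP server (the data-gathering step is omitted, and the thresholds and scope data are purely illustrative):

```python
# Minimal sketch of a DHCP scope utilization alert.
# Scope sizes, lease counts, and thresholds below are illustrative only.

WARN_THRESHOLD = 0.80   # warn at 80% utilization
CRIT_THRESHOLD = 0.95   # critical at 95% utilization

scopes = {
    "10.10.20.0/24": {"size": 254, "leased": 210},
    "10.10.30.0/24": {"size": 254, "leased": 247},
}

for name, scope in scopes.items():
    utilization = scope["leased"] / scope["size"]
    if utilization >= CRIT_THRESHOLD:
        print(f"CRITICAL: {name} at {utilization:.0%} -- scope near depletion")
    elif utilization >= WARN_THRESHOLD:
        print(f"WARNING: {name} at {utilization:.0%}")
```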

Can AI save us from AI? Nick Bostrom’s book Superintelligence might just be the most debated technology book of the year. Since its release, big names in tech and science have warned of the dangers of artificial intelligence. Bostrom says that while we don’t know exactly when artificial intelligence will rival human intelligence, many experts believe there is a good chance it will happen at some point during the 21st century.

He suggests that when AI reaches a human level of intelligence, it may very rapidly move past humans as it takes over its own development. The concept has long been discussed and is often described as an “intelligence explosion”—a term coined by computer scientist IJ Good fifty years ago. Good described the process like this: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” Bostrom says that once this happens, if we aren’t prepared, superintelligent AI might wipe us out as it acts to achieve its goals. He draws the analogy to humans redeveloping various ecosystems and, in the process, causing animal extinctions.

“If we think about what we are doing to various animal species, it’s not so much that we hate them,” Bostrom said. “For the most part, it’s just that we have other uses for their habitats, and they get wiped out as a side effect.” In one scenario Bostrom outlines, an AI programmed to make as many paper clips as possible might move against humans as it calculates how likely we are to turn it off. Or it might view us as a source of atoms for more paper clips. Broader and seemingly beneficial goal setting might backfire too. For example, a machine with the goal of making humans happy might decide the best way to do this is by implanting electrodes in our brains’ pleasure centers—this “solves” the problem, but undoubtedly not to the liking of most implantees. How then can we reap the vast problem-solving powers of superintelligent AI while avoiding the risks it poses? One way might be to develop artificial intelligence in a “sandbox” environment, limiting its abilities by keeping it disconnected from other computers or the internet.

But Bostrom thinks a superintelligent AI might easily get around such controls—even perhaps, by being on its best behavior to fool its handlers into believing it’s ready for the real world. Instead, according to Bostrom, we should focus on the AI’s motivations. This is, as outlined before, a very tricky problem. Not least because human values change over time. In short, we aren’t smart enough to train a superintelligent AI—but it is. Bostrom suggests we program a superintelligent AI to figure out what we would have asked it to do if we had millennia to ponder the question, knew more than we do now, and were smarter. “The idea is to leverage the superintelligence’s intelligence, to rely on its estimates of what we would have instructed it to do,” Bostrom suggests.

Why think about all this in such detail now? According to Bostrom, while the risk is huge, so is the payoff. “All the technologies you can imagine humans developing in the fullness of time, if we had had 10,000 years to work on it, could happen very soon after superintelligence is developed because the research would then be done by the superintelligence, which would be operating at digital rather than biological timescales.” So, what do you think? Can artificial intelligence save us from artificial intelligence? Image Credit. Intel's IoT vision sees far more than chips Intel is bringing all its assets to bear on the Internet of Things, a hot topic for nearly all IT vendors but one that's especially critical to big chip makers. While Intel would like to see its low-power chips used in sensors, wearables and other hardware that will ship in huge numbers if the industry's IoT dreams come true, it also has software, security and infrastructure to add to the mix. In the short run, those may matter more than the silicon itself.

At an event in San Francisco on Tuesday, the company announced what it calls the Intel IoT Platform, a combination of hardware, software and partnerships designed to help its customers quickly churn out complete systems. Intel also introduced its latest IoT gateway design, plus security and management capabilities that will be part of that platform. 'It really is an end to end play,' said Doug Fisher, vice president and general manager of the Intel Software and Services Group. A key part of Intel's strategy for IoT is its gateway reference designs, which can collect data from sensors and other IoT devices at the edge of a network and process and translate that data. The gateways can even turn machines that have never been networked into connected devices, translating older proprietary protocols into usable streams of data on IP (Internet Protocol) networks. On Tuesday, Intel introduced the Wind River Edge Management System, a technology stack for cloud-based control of IoT operations.

It also rolled out a new generation of the Intel IoT Gateway with the Wind River software, which will allow enterprises to quickly deploy gateways and manage them for as long as they are in use. The company also laid out a list of partners for building and deploying IoT systems in various industries. Those partners include Accenture, Capgemini, SAP, Dell and Japan's NTT Data. While Intel may someday ship millions more chips thanks to IoT, depending on how it fares against rivals using the ARM architecture, its end-to-end set of technologies doesn't really exclude chips from other vendors, Gartner analyst Mark Hung said.

In other words, Intel's data center and security assets can play a role in deployments where sensors and other components may come from elsewhere. In the short term, in fact, software and security may be Intel's biggest IoT plays when it comes to bringing in revenue, he said.

Enterprises may be interested in single-vendor, end-to-end IoT solutions for now, because they want to get the ball rolling on IoT, Hung said. But in the long run, they'll look for combinations of 'best of breed' components, a strategy that's not feasible yet because standards haven't solidified enough to ensure all the parts will work together, he said. Intel's McAfee security business introduced Enhanced Security for Intel IoT Gateways, a pre-validated solution to enhance the security of the gateways. And to serve industries that are linking older equipment to the Internet for the first time, the company is working with Siemens to add support for industrial protocols to its firewall technology. There's a window of two to five years to implement security in IoT, said Lorie Wigle, Intel's vice president of IoT Security Solutions. 'It's really critical that we build security in, particularly when we look at industrial IoT. Some of these systems may be in place for decades, so if we miss this window of opportunity, it is a big, big miss,' Wigle said.

Intel's taking one security technology it's developed for its own products, called EPID (Enhanced Privacy Identity), and promoting it to other silicon vendors. EPID separates a device's ability to prove that it's a certain class of device from its ability to prove that it's a unique, specific device. Each device has its own key, but there's a single key on the other side used to validate them. One place that may be useful is in vehicles, where a car could be authorized to use shared infrastructure such as tollbooths and smart traffic lights without identifying itself as your car in particular, Wigle said. That would keep the entities that run those systems from being able to track you wherever you drive. Stephen Lawson covers mobile, storage and networking technologies for The IDG News Service.

Follow Stephen on Twitter. Cyberextortion Posted by Cyberextortion is a crime involving an attack or threat of attack against an enterprise, coupled with a demand for money to avert or stop the attack. Cyberextortion can take many forms. Originally, denial of service (DoS) attacks were the most common method. In recent years, cybercriminals have developed ransomware that can be used to encrypt the victim's data. The attacker then demands money for the decryption key.

As the number of enterprises that rely on the Internet for their business has increased, opportunities for cyberextortionists have exploded. The probability of identification, arrest, and prosecution is low because cyberextortionists usually operate from countries other than those of their victims and use anonymous accounts and fake e-mail addresses. Cyberextortion can be lucrative, netting attackers millions of dollars annually. A typical attack may result in a demand for thousands of U.S. dollars. Payment does not guarantee that further attacks will not occur, either by the same group of cyberextortionists or by another group.

Google Compute Engine Posted by Google Compute Engine is an Infrastructure as a Service (IaaS) offering that allows clients to run workloads on Google's physical infrastructure. The Compute Engine provides a scalable number of virtual machines (VMs) to serve as large compute clusters for that purpose. GCE can be managed through a RESTful API, a command line interface (CLI) or a Web console.

GCE's application program interface (API) provides administrators with virtual machine management capabilities. VMs are available in a number of CPU and memory configurations and operating system distributions, including Debian and CentOS. Customers may use their own system images for custom virtual machines. Data at rest is encrypted using the AES-128-CBC algorithm. GCE's scalable number of allowed instances makes it possible for an administrator to create clusters with thousands of virtual CPUs. GCE allows administrators to select the region and zone where certain data resources will be stored and used. Currently, GCE has three regions: United States, Europe and Asia.

Each region has two availability zones and each zone supports either Ivy Bridge or Sandy Bridge processors. GCE also offers a suite of tools for administrators to create advanced networks on the regional level. GCE instances must be within a network to ensure that only instances within the same network can see each other by default. Compute Engine is a pay-per-usage service with a 10-minute minimum. There are no up-front fees or time-period commitments. GCE competes with Amazon's Elastic Compute Cloud (EC2) and Microsoft Azure.
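As a small illustration of the management interfaces mentioned above, the sketch below lists the virtual machines in one zone through the Compute Engine v1 API using Google's Python client library. The project ID and zone are placeholders, and the snippet assumes the google-api-python-client package is installed and that Application Default Credentials are already configured.

```python
# Minimal sketch: list Compute Engine instances in one zone via the v1 API.
# Assumes google-api-python-client is installed and Application Default
# Credentials are configured; the project ID and zone are placeholders.

from googleapiclient import discovery

PROJECT = "my-example-project"   # placeholder project ID
ZONE = "us-central1-a"           # placeholder zone

compute = discovery.build("compute", "v1")
response = compute.instances().list(project=PROJECT, zone=ZONE).execute()

for instance in response.get("items", []):
    print(instance["name"], instance["status"])
```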

Coming Data Deluge Means You’ll Know Anything You Want, Anytime, Anywhere BY We’re heading towards a world of perfect knowledge. Soon you’ll be able to know anything you want, anytime, anywhere, and query that data for answers and insights. Why is this happening? And what are the implications?

These are the questions this blog will explore. An explosion of ubiquitous, omnipresent cameras The first digital camera built by Kodak in 1976 was a 0.01 Megapixel camera. It was the size of a toaster and cost thousands of dollars. Today’s digital cameras are 1 billion times better.

In a decade, they will be 1 trillion times better. Where is ubiquitous imaging/sensing heading? • Imaging from our streets: Fleets of autonomous cars will image everything in and around our roads, constantly. A single Google Autonomous car using LiDAR (laser imaging radar) generates over 1.3 million points per second (750 Mbits/sec of data) in a “360° view”.

• Imaging from space: Today there are three private orbital satellite constellations with two more being planned soon. These near-real time imaging services from space are offering 0.5 meter to 5 meter resolution of any spot on the planet, with video and multi-spectral options. • Imaging from our skies: Beyond orbiting satellites, we will soon have armies of drones flying above our streets imaging the ground at centimeter resolution. • Imaging from our sidewalks: Whatever Google Glass becomes, we’ll see a future where people walk around with always-on, active cameras that image everything on our streets, at millimeter resolution. NOTE: These are examples just from the realm of ubiquitous imaging sensors. Beyond this, there will be an explosion of audio/vibration, genomics and biometrics sensors, to name just a few.

In the decade ahead, we’re heading towards a trillion-sensor world. In 2013, we generated 4 zettabytes (4x10^21 bytes) of data.

Data generation is doubling every two years and accelerating. By 2020 we’ll be up to 44 zettabytes (i.e. 44 trillion gigabytes). Then, with the power of machine learning, data science, increased computational power, and global connectivity we can process, learn from, explore, and leverage that information to ask and answer almost any question. Questions we will be able to ask, and get answered Who caused that accident? While autonomous cars are unlikely to crash (bad news for the insurance industry), accidents caused by human-driven cars on the road will never be mysteries again. Imagery from LIDAR or equivalent sensors will tell you exactly who caused the accident and how.

How’s my competitor performing? Orbital satellite imaging can tell you exactly how many cars were in a competitor’s parking lot last weekend. Which locations attract more shoppers? What is the status of your competitor’s supply chain – raw materials in, and finished products out? Where did that gun-shot come from? ShotSpotter, a gunfire detection technology, gathers data from a network of acoustic sensors placed throughout a city, filters the data through an algorithm to isolate the sound of gunfire, triangulates the location within about ten feet, then reports it directly to the police. It’s more accurate than info from 911 callers. What is the most popular dress color Friday night in Manhattan? Want to know the fashion trends in your city?

You will be able to gather images and mine them to determine the most popular colors and fashions on any street, mall or borough. What is the prevalence of heart disease or Alzheimer’s in my neighborhood? This may sound disgusting, but imagine sampling the sewage coming out of your neighborhood. By analyzing the DNA in biological waste in those pipes, you can tell the prevalence of one disease over another in that community. Do you think that might be of interest to a health or life insurance company? Who are the happiest people in the U.S.? Researchers from the University of Vermont used Mechanical Turk to rank thousands of words by “happiness” levels. They then wrote an algorithm that analyzed 10 million tweets, used the Mechanical Turk data as a training set, and determined which are the happiest U.S. States (Hawaii) and the saddest (Louisiana).
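A minimal Python sketch of the general approach (the word ratings and tweets below are made up for illustration; the actual study used a crowd-rated lexicon of thousands of words and millions of geotagged tweets): score each tweet by averaging the happiness ratings of its recognized words, then average the scores per region.

```python
# Toy sketch of lexicon-based "happiness" scoring for tweets.
# Word ratings and tweets are invented for illustration only.

HAPPINESS = {"love": 8.4, "beach": 7.9, "happy": 8.2,
             "traffic": 3.2, "rain": 4.0, "hate": 2.3}

def tweet_score(text):
    """Average happiness of the recognized words in a tweet (None if no match)."""
    scores = [HAPPINESS[w] for w in text.lower().split() if w in HAPPINESS]
    return sum(scores) / len(scores) if scores else None

tweets_by_state = {
    "HI": ["love the beach today", "so happy"],
    "LA": ["stuck in traffic again", "rain rain rain"],
}

for state, tweets in tweets_by_state.items():
    scored = [s for s in (tweet_score(t) for t in tweets) if s is not None]
    print(state, round(sum(scored) / len(scored), 2))
```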

They could even explore semantic trends down to the zip-code. Consequences for the Entrepreneur/CEO I think about this stuff a lot. We live in the most exciting time ever.

As we move towards a world of perfect information, we are going to be disrupting many industries and creating even more entrepreneurial business opportunities. Which industries are going to change because of this data revolution? [ image credit: courtesy of Shutterstock]. Connected Enterprise: A Managed Approach to Leveraging IoT Insights Of 779 senior business leaders surveyed globally, 61 percent said that companies that are slow to integrate IoT into their business will fall behind their competition, according to data collected in June 2013 by the Economist Intelligence Unit (EIU).

Forty-nine percent of EIU survey participants were C-levels and board members. Yet, according to data from a Deloitte analysis of 89 IoT implementations deployed between 2009 and 2013, only 13 percent of the IoT use cases Deloitte studied targeted revenue growth or innovation as a main objective. Conclusion: The successful enterprise will focus on significant IoT innovations and profits. Please read the attached whitepaper. Scaling SaaS Delivery for Long-Term Success Critical Cost and Performance Considerations to Ensure Sustainable & Profitable Service Delivery Adoption of Software-as-a-Service (SaaS) solutions is accelerating. Every software industry survey and market forecast says so. This is because SaaS applications are almost always more flexible, versatile, and cost-effective than traditional on-premises solutions.

This trend represents a great opportunity for established SaaS companies to expand their customer base. But it also presents challenges for SaaS providers seeking to gain a competitive advantage and to build sustainable businesses over the long-haul. Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) resources make it easier and more economical for companies to launch SaaS businesses. They eliminate many service delivery and software development barriers to entry. This, in turn, has opened the door to a proliferation of players vying for market share.

This ‘Cloud Rush’ effect is commoditizing many segments of the SaaS marketplace. Competitors are often willing to undercut each other on price to win over customers. At the same time, SaaS has made it much easier for customers to switch solutions if they experience availability or performance issues. Savvy SaaS executives recognize that their service delivery requirements increase as their business grows and the demands of their customers expand. Many leading SaaS companies leverage a combination of public and private cloud resources to meet the varying needs of their growing customer base. THINKstrategies believes it is critical for SaaS companies to team with leading data center providers that offer multi-cloud capabilities to meet their mission-critical service delivery needs.

In particular, they should partner with providers that offer a variety of public cloud and hosted service alternatives on a global scale to ensure the availability and performance of their SaaS solutions. This whitepaper discusses these imperatives, and provides THINKstrategies’ recommendations for SaaS executives looking for the best service delivery alternatives to capitalize on today’s market opportunities. Please read the attached whitepaper. To clarify this ambiguous statement, we’re wading into a future where we will require more precise definitions to discuss increasingly complicated, complex and more finely nuanced objects, situations and roles people have in the world. However it unfolds, it’s a good bet that it will involve things for which we don’t yet have good names. Catch-all terms, particularly when applied to emerging phenomena, do us more harm than good, and we need to find better options to communicate about them if we’re going to understand what comes next.

The sphere of emerging technology is probably where the definition gaps currently yawn widest. Let’s take the term “hacker,” not new per se, but one which has seen a lot of action in the past year as the pace of attacks on networks, databases and infrastructure has appeared to accelerate. It’s certainly been well abused of late. Eric Raymond reminds us that “hacker” was originally used in the early 1960s to refer to amateur computer enthusiasts, people who tinkered and built hardware and software. It applied to a fast-changing and diversifying set of subcultures around programming and computing.

As such, it would probably cover many more dedicated internet users today — people who have taken it upon themselves to learn the tools of the network. Under current definitions, only a small minority of us would comfortably claim the title, because “hacker” has been largely criminalized in its most common usage. It is now bandied about to refer to anyone carrying out activities on or around computers or networks that go against the interests of businesses, governments, or powerful individuals, not simply clearly criminal attacks or nefarious activity. (Not to mention the fact that the term hack has also been co-opted to refer to tips for productivity and efficiency in all areas of life.) We don’t distinguish among “hacking” behaviors now — everything that’s done in any way to harm, compromise, gain unauthorized access to, probe, or monitor without knowledge is considered a hack or hacking. And media outlets obligingly stretch the definition as wide as possible for short headlines and shallow stories. We don’t ask about function, motive, provenance, authority, or any other detail. Hacking is coding is theft is an intelligence operation is a malware insertion is a leak is a practical workaround.

File it all over there, in the menacing box with the skull and crossbones on it. Without knowing what the nature or context of an act is, it’s far too easy to sweep all things that seem similar under one very large rug and leave it there. But in doing so, we stand to learn absolutely nothing about it or from it, and are no more prepared to deal with similar issues the next time they impact us. Security expert Bruce Schneier has made a similar point: we don’t really know anything firm about the incident, or who was behind it, so simply running around shouting “hackers!” doesn’t tell us much. Who executed the attack, why and how are all important data points in making future strategic adjustments. They’re everywhere.

(Who’s “they”?) As a fellow observer of near futures, similar things are happening with words like “robots,” “algorithms,” and “drones.” We casually use them as shorthand, but (increasingly) there are worlds of difference between, say, an industrial robot on a production line and a telepresence unit on wheels. “Robot” used to mean a humanoid machine capable of executing commands. Yet advances in engineering mean the machines we task to do things for us take many forms, and only a minority look anything like us. So when a headline shouts “Are Robots Stealing Our Jobs?” one has to ask, “What job, performed how and by whom?” to get closer to a meaningful understanding of what a robot could be here. Likewise, with the word “drone.” I think you’d know the difference between a fully armed Reaper drone locked on your location and a cheap palm-sized toy buzzing around you, at least for a few meaningful seconds. Even the “drone” industry is searching for a better label.

Part of this search for a better label is for marketing clarity, part of it is defense against negative attention. The term has already become quite sticky, as has negative attention around drones, so differentiating names by function, or throwing in qualifiers (toy drone, military drone, farming drone) is tough. Yet, unlike what has happened with “hackers,” as time goes on, we’ll probably see more fine-tuned language around drones, because unlike with “hackers,” we can stratify a good deal of what’s going on with drones in our daily lives, and we’ll need names to refer to different activities so we don’t accidentally call in a Hellfire missile strike when we just want an orchard irrigated or a package delivered. In the dark But “hackers,” “algorithms,” and to some extent “robots,” sit behind metaphorical — or actual — closed doors, where obscurity can benefit those who would like to use these terms, or exercise the realities behind them to their own benefit, though perhaps not to ours. We need better definitions, and more exact words, to talk about these things because, frankly, these particular examples are part of a larger landscape of “actors” which will define how we live in coming years, alongside other ambiguous terms like “terrorist,” or “immigrant,” about which clear discourse will only become more important.

Language is power—power that can open up, or close down, knowledge and understanding, both of which we need to make informed decisions about individual and collective futures. Everyone doesn’t need to become a technical expert, or keep a field guide to drones and robots handy (though it might be useful sooner rather than later), but, as I’ve pointed out before, we might all benefit from having a clearer understanding of how the world is changing around us, and what new creatures we’ll encounter out there. Perhaps it’s time we all start wielding language with greater clarity. I’m sure the robots will. Global Citizenship: Technology Is Rapidly Dissolving National Borders Besides your passport, what really defines your nationality these days? Is it where you live? Where you work?

The language you speak? The currency you use?

If it’s any of these, then we may see the idea of “nationality” quickly dissolve in the decades ahead. Language, currency and residency are rapidly being disrupted and dematerialized by technology. Where you live, where you work Increasingly, technological developments will allow us to live and work almost anywhere on the planet (and even beyond). Soon, you’ll be able to live in the Greek Islands and work in Manhattan, London, and Los Angeles. Telepresence & Virtual Environments Today I use telepresence robots to telecommute around the globe, attend an XPRIZE meeting in India, or, if I’m overseas, pop home for breakfast or dinner with my kids. The product I personally use comes from Suitable Technologies and is called the “Beam.” I have about 15 beams across my different companies, and I’ll be integrating another 20 beams into my Abundance 360 Summit. Beyond these telepresence technologies, the biggest impact on dematerializing nationality will come from development of fully immersive, high fidelity, virtual worlds.

Virtual workplaces you plug into using VR gear to interact with other virtual workers (perhaps even A.I.s) on a daily basis. The earliest example of a virtual world where people were “living” and “working” is Philip Rosedale’s Second Life. You can think of it as a proof of concept, an early prototype of what is coming. Think of it as Pong, compared to today’s video games. Even as rudimentary as Second Life is today, its annual revenues have reached US$567 million, and since its inception, it has transacted over US$3.5 billion as people build and sell virtual products in this virtual world. Not bad. But what is coming next will be transformational. With the creation of new VR technologies (Oculus Rift technology, Samsung Gear) and 360-degree camera technology (Immersive Media, Jaunt), we’ll be able to slide on a pair of goggles and “go” anywhere in the real and virtual world.

Companies will forgo bricks and mortar, and instead allow their workforces, from around the world, to beam into the same environment and work cooperatively. Think about a ‘kinder, gentler’ version of The Matrix. What language you speak We are headed toward a world where everyone will have the tools to speak every language, in real time. Right now, Google Translate does a damn good job.

The system built by Franz Och at Google over the last decade can now support translation between 80 language pairs. (Note: Franz is now heading machine learning at Human Longevity Inc, where he is helping to translate between the languages of genetics, phenotype and metabolome.) In 2013, Google stated that Translate served 200 million people daily. Another more recent example of simultaneous translation, this time between spoken word, is Skype’s recently announced “Live Translate.” Skype’s embedded artificial intelligence promises to translate your voice into another language in close to real-time while you are video-Skyping someone else on the other side of the planet (right now, it only serves English/Spanish translations).

The bottom line: Star Trek universal translator is here and it’s going to be a game-changer. What currency you use Decentralized, unregulated cryptocurrencies (like bitcoin) will make it MUCH easier to trade and transact both across and within borders. While this year hasn’t been so great for bitcoin, the fact is, cryptocurrencies are here to stay and will find more and more useful applications. Take the recent Russian ruble crisis for example. In Q4 2014, the ruble had a rapid devaluation due to political instability and the crashing price of oil, ending up at a 14-year low. So what happened?

Russians have started pouring money into bitcoin. In mid-December 2014, CNBC reported, “Transaction volumes between the ruble and digital currency bitcoin enjoyed their biggest day of the year. This was close to a 250 percent increase in transactions.” Bitcoin dematerializes the banks, and demonetizes transaction fees. It is global and unregulated. And it is easy to use.

With these characteristics, we will see a shift away from national currencies toward global cryptocurrencies that provide some level of stability and independence from your country’s political turmoil, or whether your country’s GDP is based on oil exports. Consequences for the Entrepreneur/CEO We live in the most exciting time ever. In my mind, a borderless world will be a more creative, lucrative, healthy, and frankly, exciting one.

Especially for entrepreneurs. A world where you, as an entrepreneur can now become a ‘multinational corporation’, accessing 5 billion potential customers.

[ image credit: Suitable Technologies]. Prep for continuous delivery with iterative development Are you ready for continuous delivery? This article explains how an established, iterative development practice puts you on the right path. If you aren't practicing iterative development, you aren't ready for continuous delivery.

If you aren't doing automated testing, you aren't ready for continuous delivery. If you don't have a quick way to see what happens to code when it's deployed on your infrastructure, you aren't ready for continuous delivery. If you don't have a staging environment for review before production release, you aren't ready for continuous delivery. If the business you work for hasn't made a commitment to continuous delivery, you cannot reap the full benefit of ongoing, rapid software releases.

'Continuous delivery is the sum of a series of practices,' said Carl Caum, a prototype engineer at Puppet Labs. 'The end goal is to deploy every [software] change at the push of a button.

But there are numerous problems you have to solve before you can do that.' In this two-part series, Caum and other continuous delivery experts outline those problems -- which define the prerequisites for charting a course to continuous delivery. This article, the first in the series, discusses what continuous delivery is, and explains how an established, iterative development practice sets the foundation for this new way of releasing software. The second article in the series examines why automated testing, adopting infrastructure-as-code practices and establishing a staging environment are essential aspects of this new approach to delivering software. Neither tip in this two-part series focuses primarily on the business commitment itself. But it bears repeating that software organizations cannot succeed at continuous delivery unless they make a sustained commitment to this process. 'Fundamentally, continuous delivery is a business decision, and if management doesn't get it, it's a tough sell,'

said Mary Poppendieck, co-author with Tom Poppendieck of several books on lean software development. The foundation: Iterative development Continuous delivery is not a software development methodology per se. It is a practice -- or rather, a series of practices -- of developing and testing software in a way that lets organizations quickly issue updates anytime. Many software developers and testers see it as an extension of agile development.

They are not wrong, but a broader definition is more accurate. What really prepares a software team to take on continuous delivery is solid experience with any form of iterative development. 'There is a tendency to pigeonhole methodologies,' said Eric Nguyen, director of business intelligence at a company that sells requirements and test management software. But how a software team defines itself -- Agile, lean or otherwise -- doesn't matter, said Stephen Forte, chief strategy officer at a development tool maker. 'They are all adapting toward continuous delivery.' What does matter is a mind-set that says, 'Be more iterative,' Nguyen said.

Whether you're writing and testing code or defining a business problem, 'you are breaking things down into smaller and smaller [pieces],' he added. And that is ultimately what continuous delivery is all about.

Building on earlier practices Continuous delivery requires software teams to build on earlier, familiar iterative practices, such as continuous integration, Puppet Labs' Caum said. 'Continuous integration provides a quick feedback loop for developers, letting them know whether the code works.' Continuous delivery takes it a step further. 'It lets you deploy that code and see how it works,' he said.

In short, iterative development is the foundation on which continuous delivery rests. If you aren't practicing iterative development, you are not ready to take on continuous delivery. A Small Section of the World will debut in theaters in December, followed by a robust online and broadcast push distributed by FilmBuff, all brought to consumers by Italian coffee maker Illy. Considering that it bears no branding (save for a shot of an Illy-sponsored conference and some scenes inside an Illy factory), the award-winning filmmakers don't want it to be labeled a 'branded documentary.'

They believe in its merits regardless of how it was backed—so much so that they intend to submit it for consideration at the Academy Awards as well as advertising competitions like the Cannes Lions. 'It really doesn't matter any longer if it's branded entertainment or entertainment,' said Dominic Sandifer, Greenlight Media and Marketing president and co-executive producer. 'What matters is if it's a great story.' Getting a documentary bankrolled is harder than ever.

At the same time, documentaries are in vogue thanks to the growth of online video channels like Netflix and the increasing demand for premium video content on the Web, said Marc Schiller, CEO of event and film marketing firm Bond. And brands are realizing they don't need to plaster their logos on a film to get their company's positioning across. 'It's the purest form of content marketing,' said Rebecca Lieb, Altimeter Group analyst. A Small Section of the World's director, Lesley Chilcott, who received an Oscar for co-producing An Inconvenient Truth, admitted she was skeptical at the outset. Illy explained that the film had been in development for a year, after its agronomist visited Asomobi, which provides coffee for Illy. After investigating the story for herself and being assured that she would have final cut—the last approval on a movie—she came on board. 'To be honest, at first I said no,' she said.

'How can I make a movie on coffee producers paid for by a coffee maker?' Similarly, Patagonia backed DamNation, a film about the damage that outdated dams can create. DamNation bears minimal branding, and its directors were also granted final cut. After completing a short theatrical run to qualify for the Academy Awards, DamNation was released online. It will also be available on Netflix. 'We're here to solve environmental problems,' said Joy Howard, vp of marketing at Patagonia.

'If we can show that, then people process what we're about, become loyal and commit to the brand.' Morgan Spurlock, who helmed a documentary about branded content, said filmmakers should be cautious about taking a marketer's money. Still, he's not against it, having partnered with Maker Studios on multiple brand-sponsored Web series in early 2015. 'You can have a brand come in and be a part of something, but you have to know they want to exert some sort of influence,' he said. '[Brand backing] can hinder the film's ability to compete in that space,' said Howard.

'Something that had a huge budget is not viewed on the same footing as a documentary that has a scrappier background.' Chilcott understands there may be bias against A Small Section of the World. But as more brand marketers finance filmmaking, she hopes people will judge on merit, not on who is footing the bill. 'I think in three to four years, this won't even be a story,' she said.