Archive for February, 2015

Tiny Clothing Wires to Ward Off the Big Chill

February 28, 2015

The Wall Street Journal | Nano Letters

Nanotechnology specialists promise clothing that keeps more heat in


The problem with regular clothing is that, while it does a decent job of curtailing heat loss through contact or air, it’s terrible at capturing the radiant body heat that humans emit. Getty Images

Jan. 9, 2015

As this week’s cold spell in much of the U.S. reminded us, clothing is supposed to keep you warm. But scientists at Stanford University say it could be doing a much better job—so much better, in fact, that it could put a serious dent in global energy consumption.

That’s the concept behind a high-tech fabric the researchers have developed. By coating textiles with a network of tiny, invisible metallic wires—a network that won’t be felt by wearers—the scientists discovered that they could boost a garment’s thermal properties without sacrificing functionality.

The problem with regular clothing is that, while it does a decent job of curtailing heat loss through contact or air, it’s terrible at capturing the radiant body heat that humans emit. A Mylar overcoat would contain this heat (it’s actually a form of electromagnetic energy), but it would make the wearer uncomfortable because the material doesn’t breathe. As the scientists explain in a new paper on the subject, “the plastic sheet and the aluminum film [in a Mylar coat] are not vapor permeable.”

Enter nanotechnology, the science of very small things. Nanotech fabric coatings are already being used to make garments shed water, kill microbes and block sunlight. The Stanford scientists found that by coating fabric with silver nanowires in a chemical bath, they could produce clothing that traps the body’s radiant heat but still breathes about as well as uncoated fabric and can be washed freely.

The material can even provide further warmth through the application of a little electricity. Imagine a sweater that comes with a charger like the one you use for your smartphone. You could even carry a battery in your pocket on the ski slopes.

So far the scientists have used the technology on cotton and synthetic fabrics, but Yi Cui, one of the paper’s authors, says that he’s convinced it will work on any textile. He adds that very few of the nanowires would come off. The wires would impart a silvery gray sheen to a fabric, but this could be masked by dyes.

How much warmer can such fabrics make you? Dr. Cui figures that clothing coated in nanowires might enable normally attired wearers to remain comfortable at indoor temperatures of 60 or even 55 degrees Fahrenheit during winter, without any added electricity. That may not sound like much, but the scientists point out that indoor heating accounts for nearly half of global energy consumption.

They have calculated that one person wearing their thermal textiles—with 12 watts added to warm things further—could save about 1,000 kilowatt-hours of electricity a year, or about what an average home uses in a month. The garments would be useless in summer, but the scientists are working on fabrics that can do the same thing in reverse, helping wearers to shed radiant body heat.
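These figures are easy to sanity-check with back-of-envelope arithmetic. The sketch below uses only the numbers quoted above; the year-round operating assumption is an illustration, not something stated in the paper. The claimed 1,000 kWh per year corresponds to roughly 114 W of heating running continuously, about ten times the 12 W the garment itself would draw.

```python
# Back-of-envelope check of the savings claim, using only figures
# quoted in the article (8,760 hours/year is the only added constant).

HOURS_PER_YEAR = 365 * 24            # 8,760 h

claimed_savings_kwh = 1000           # kWh saved per person per year
joule_heating_w = 12                 # watts of active garment heating

# Continuous heating power that the claimed annual savings corresponds to
implied_offset_w = claimed_savings_kwh * 1000 / HOURS_PER_YEAR
print(f"implied offset: {implied_offset_w:.0f} W")   # ~114 W avoided

# Electricity the garment itself would draw if powered all year
garment_kwh = joule_heating_w * HOURS_PER_YEAR / 1000
print(f"garment draw: {garment_kwh:.0f} kWh/yr")     # ~105 kWh/yr
```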

Dr. Cui, a professor of materials science and engineering, estimates that covering an entire human body would only take perhaps 50 cents worth of silver. He adds: “It’s going to be a lot cheaper than cashmere, for sure.”

Personal Thermal Management by Metallic Nanowire-Coated Textile


Department of Materials Science and Engineering, Department of Applied Physics, Department of Civil and Environmental Engineering, and Department of Electrical Engineering, Stanford University, Stanford, California 94305, United States

Stanford Institute for Materials and Energy Sciences, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, California 94025, United States

Nano Lett., 2015, 15 (1), pp 365–371

DOI: 10.1021/nl5036572

Publication Date (Web): November 30, 2014

Copyright © 2014 American Chemical Society



Textiles and Fibers


Heating consumes a large amount of energy and is a primary source of greenhouse gas emissions. Although energy-efficient buildings are developing quickly based on improved insulation and design, a large portion of energy continues to be wasted on heating empty space and nonhuman objects. Here, we demonstrate a system of personal thermal management using metallic nanowire-embedded cloth that can reduce this waste. The metallic nanowires form a conductive network that not only is highly thermally insulating, because it reflects human body infrared radiation, but also allows Joule heating to complement the passive insulation. The breathability and durability of the original cloth are not sacrificed because of the nanowires’ porous structure. This nanowire cloth can efficiently warm human bodies and save hundreds of watts per person as compared to traditional indoor heaters.


metallic nanowires; textile; low-emissivity materials; thermal management


Will They Need A Search Warrant For Your Brain?

February 27, 2015


February 27th, 2015 | by Carrie Peyton Dahlberg
Brain imaging can already pull bits of information from the minds of willing volunteers in laboratories. What happens when police or lawyers want to use it to pry a key fact from the mind of an unwilling person?

Will your brain be protected under the Fourth Amendment from unreasonable search and seizure?

Or will your brain have a Fifth Amendment right against self-incrimination?

“These are issues the United States Supreme Court is going to have to resolve,” said Nita Farahany, a professor of law and philosophy at Duke University in Durham, North Carolina, who specializes in bioethical issues.

Those legal choices are likely decades away, in part because the exacting, often finicky process of functional magnetic resonance imaging (fMRI) could be thwarted if a reluctant person so much as swallowed at the wrong time. Also, a brain exam couldn’t be admitted in court unless it worked well enough to meet the legal standards for scientific evidence.

Still, the progress being made in “brain decoding” is so intriguing that legal scholars and neuroscientists couldn’t resist speculating during a law and memory session earlier this month at the annual conference of the American Association for the Advancement of Science in San Jose, California.

Our brains are constantly sorting, storing and responding to stimuli. As researchers figure out exactly where and how the brain encodes information, the fMRI also becomes a tool that can decode that information. The fMRI can identify the portions of the brain that are active, based on the increased quantity of freshly oxygenated blood they draw. Already, brain decoding can perform a version of that old magician’s trick — guess what card someone is looking at — with better than 90 percent accuracy, University of California, Berkeley neuroscientist Jack Gallant told the group.

Farahany predicts that like most new science, brain decoding will break into the courtroom for the first time through a cooperative witness, someone who wants to use it to advance his or her case.

Stanford University law professor Henry Greely, who moderated the Feb. 13 law and memory session, suggested that a court might be especially open to novel techniques during the sentencing hearing in a death penalty case.

Both agreed that compelling someone to undergo a brain scan, the way a person might now be ordered to provide a urine sample or a DNA swab, would come much later. Even if the scan method were so non-invasive that some might argue it isn’t a search at all, Farahany thinks the courts will probably decide it is, and so will consider that you are protected from “unreasonable” brain searches. That, though, only means the authorities would need a search warrant for your brain.

As to self-incrimination, people cannot invoke the Fifth Amendment now to withhold certain purely physical information from their bodies, such as fingerprints. A court might draw parallels, she said, to brain activity.

Farahany has been monitoring early attempts to bring brain science into the courtroom with some sort of fMRI lie detection. So far, she said, no court has admitted it into evidence, concluding there is no scientific consensus that it works dependably.

Lie detection could prove much tougher than the more basic decoding going on in the lab, said Gallant, because lies are nuanced things, springing from a wide range of motives and emotional states.

Gallant is one of the best known researchers in a field that has been glibly described as computerized mind-reading. It is far from that, but brain decoding has made dramatic advances in visual imagery in Gallant’s lab. Some of his recent work has involved asking volunteers to watch a compilation of video clips showing brief glimpses of short scenes while an fMRI measures the oxygenation of blood in different parts of their brains. His lab’s computer models can then determine what that person might be watching when shown new video clips, ones that he or she has never seen before. This decoding can pick up general categories: woman, man, people talking, buildings or the ocean. But it won’t stop there.

“Brain decoding is going to keep getting better and better,” he said, because our understanding of how the brain encodes keeps growing. The two move in tandem, as inseparable as two sides of the same piece of toast.

Long before lie detection matures, brain decoding may be capable of extracting information an investigator might want, such as the encryption code to a file or the combination to a safe.

“You could easily decode a number sequence from somebody’s brain from fMRI now. Internal, unpublished data from my lab suggests that would not be difficult to do,” Gallant said in a phone interview a few days after his talk.

What you couldn’t do, he said, is decode numbers from the brain of a squirming, uncooperative person who wants to mess with the MRI machine.

“No way,” Gallant said.

Not in our lifetimes. But in our children’s lifetimes? He’s pretty sure that improved techniques will emerge.


Google’s DeepMind artificial intelligence aces Atari gaming challenge

February 27, 2015

DeepMind has published a paper detailing how its AI tech not only learned how to play a host of Atari games, but has gone on to succeed at a number of them.

DeepMind released a paper in scientific journal Nature this week detailing its deep Q-network (DQN) algorithm’s ability to play 49 computer games originally designed for the Atari 2600 – including a Pong-like game called Breakout, River Raid, Boxing, and Enduro – and do as well as a human player on half of them.

The Nature paper builds on previous work from DeepMind detailing how the algorithm performed on seven Atari 2600 games. While the system fared well compared to a human player, it lagged behind flesh-and-blood gamers when taking on the classic Space Invaders, because the algorithm had to work out a longer-term strategy to succeed.

A video of DeepMind founder Demis Hassabis demonstrating DQN playing Breakout was posted on YouTube in April last year. At first, the algorithm struggles to return the ball but, after a few hundred plays, it eventually learns the best strategy to beat the game: break a tunnel into the side of the brick wall and then aim the ball behind the wall.

The system now excels at a number of games including Video Pinball, Boxing, Breakout, and Star Gunner, while its performance lags humans on Ms Pac-Man, Asteroids, and Seaquest.

“Strikingly, DQN was able to work straight ‘out of the box’ across all these games – using the same network architecture and tuning parameters throughout and provided only with the raw screen pixels, set of available actions, and game score as input,” Hassabis and co-author of the paper Dharshan Kumaran said in a blog post on Wednesday.

The pair add that DQN combines deep neural networks with reinforcement learning.

“Foremost among these was a neurobiologically inspired mechanism, termed ‘experience replay’, whereby during the learning phase DQN was trained on samples drawn from a pool of stored episodes – a process physically realized in a brain structure called the hippocampus through the ultra-fast reactivation of recent experiences during rest periods (eg sleep),” they said.
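The “experience replay” idea in that quote, storing past transitions and revisiting them in random order during training, can be sketched in a few lines. This is a generic illustration of the mechanism, not DeepMind’s code; the class name and capacity are invented for the example.

```python
import random
from collections import deque

class ReplayBuffer:
    """Sketch of an experience-replay pool (a generic illustration of
    the mechanism described in the quote, not DeepMind's code)."""

    def __init__(self, capacity=100_000):
        # Oldest transitions fall out once capacity is reached
        self.pool = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.pool.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # A uniformly random minibatch breaks the correlation between
        # consecutive frames that destabilizes purely online learning
        return random.sample(self.pool, batch_size)

# Usage: record 1,000 dummy transitions, then draw a training batch
buf = ReplayBuffer()
for t in range(1000):
    buf.add(t, t % 4, 0.0, t + 1, False)
batch = buf.sample()
print(len(batch))  # 32
```

During training, each sampled batch would be fed to the network's update step, so the same stored episode can be learned from many times, much like the hippocampal replay the authors describe.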

Google acquired DeepMind last year for a reported $400m and has since teamed up with Oxford University for joint research into AI. DeepMind was initially developed with financial backing from Tesla Motors’ CEO Elon Musk, who last year said he took a stake in the business with the hope of steering AI away from a Terminator-like future.

DeepMind said that its AI tech could end up helping to improve products like Google Now. “Imagine if you could ask the Google app to complete any kind of complex task: ‘OK, Google, plan me a great backpacking trip through Europe!'”



Google’s Android Success Paves The Way For Sustained Growth

February 27, 2015
 Feb. 26, 2015 12:15 PM ET  
  • Android is the predominant global platform for smart connected devices.
  • The sheer mass of Android users is prompting developers to shift their resources to Android applications.
  • Despite growing competition, Google should continue to build its advertising revenues at a double-digit pace.
  • Google stock is a reasonable value and can be added on any pull back.

Google (NASDAQ:GOOG) (NASDAQ:GOOGL) is winning the war to power smart connected devices, and the battle is not even close. In 2014, Google’s Android OS ran 49% of the world’s connected devices, and that share is expected to grow to 59% this year and 63% in 2016. Once-dominant Windows is holding onto a 14-15% share based on its enormous strength in the enterprise, and iOS is holding ground at 11-12%.


The rise of Android has been nothing short of breathtaking. The first Android powered mobile device was introduced October 22, 2008.

Google’s strategy for Android tore a page from the early days of Microsoft (NASDAQ:MSFT) under Bill Gates. MS-DOS was a fledgling operating system, and Gates wanted to make it the standard for personal computers. The strategy to do so was simple and effective: make it cheap and price it to exclude competitors. Microsoft’s early pricing policies were designed to make it uneconomical to use any other OS, a fact established during the antitrust proceedings against Microsoft.

The early version of DOS could run in only 12 KB of memory, making it extremely fast even on the relatively slow CPUs of the 1980s, when it was introduced. In combination, a very fast OS with a small footprint, priced to make it very hard for alternatives to gain traction, was a successful strategy.

Google goes one better. By making Android open source and free, Google made it impossible for anyone to compete on price. Google’s mobile advertising revenue paid the freight, and Microsoft found it hard to get OEMs to use its Windows Phone OS and pay for a license. Living under the umbrella of highly priced iOS systems, OEMs choosing Android could develop devices for all price points, and the sheer volume of Android devices that followed would fuel a virtuous cycle of developers building an applications ecosystem to rival all comers.

Google seemed to understand what I see as the successful strategy of most technology start-ups today – establish a world-leading user base before getting too worried about profitability. Facebook (NASDAQ:FB) used the same strategy with enormous success. Netflix (NASDAQ:NFLX) seems to have a similar plan. Music streamers like Pandora (NYSE:P) and Spotify continue to place the bulk of their effort on building their user base.

Google’s success in smartphones is perhaps the most impressive.

Despite a late start, Android smartphones now have 1.6 billion users, more than four times second-place iOS and dramatically more than Windows Phone OS, BlackBerry (NASDAQ:BBRY) or Symbian, all of which were established players before the first Android device was launched.


By mid-2013, more developers were using Android as a platform than any other operating system.


At the same time, developers are finding that the profitability of Android applications is rising relative to iOS applications and, when media advertising costs (which are 20-50% lower on Android devices than on iOS devices) are considered, many Android apps are now as profitable as, or more profitable than, comparable iOS applications.


Source: Business Insider

Google has by far the lion’s share of mobile advertising revenue but that revenue is under increasing competitive pressure as Facebook and others vie to cut their way into the mobile advertising pie.


Even with competition, mobile advertising is growing at an extraordinary clip, and Google is well positioned to continue to benefit.


To keep its mobile advertising dollars growing in the face of increasing competition, Google has to continue to expand the global use of its Android OS while ensuring its applications like Google Search, Gmail and Google Maps remain the gold standard for smart connected devices regardless of their operating system.

I think Google will do just that. Google stock has climbed about 10% since hitting $490 about a month ago. Any pullback will be a buying opportunity. I have no current position in Google.


In Big Step for Artificial Intelligence, Machine Learns To Master Video Games

February 25, 2015
Txchnologist, sponsored by GE
February 25th, 2015


by Michael Keller

Competitive gamers beware: There’s a new top dog in the classic arcade category. This champion has cracked 49 vintage Atari 2600 titles, from Breakout to Star Gunner and Space Invaders, outperforming professional game testers by more than 1,000 percent in some cases. Success didn’t come easy. Improvements happened one attempt at a time through an intense period of training, which included playing and then studying each frame of every game millions of times without a break.

It’s understandable if this newly minted master’s name isn’t familiar to the millions who play Call of Duty or GTA every day, because its creators have been keeping it under wraps while they improved its algorithms. The name is deep Q-network, but you can call it DQN.

Its developers at a Google enterprise called Deepmind say the system is capable of quickly learning how to excel at games even though it starts with minimal background information. DQN represents a significant advance in artificial intelligence, combining machine learning and the principles of neuroscience to make a computer program learn like animals do.

“This work is the first time anyone has built a single general learning system that can learn directly from experience to master a wide range of challenging tasks—in this case a set of Atari games—and perform at or better than human level on those games,” says Demis Hassabis, an AI researcher and neuroscientist. His team’s work was published today in the journal Nature.

They constructed the DQN program using something called a deep convolutional neural network, a set of learning algorithms inspired by biological nervous systems that can ingest lots of data all at once and compute values from them. The group’s achievement comes in their novel merging of two types of machine learning, deep and reinforcement learning, to train their system’s artificial neural network. These approaches allow DQN, which they call an AI agent, to start off completely ignorant of how the game works and graduate to become a master player.

In fact, DQN starts off much as a human does if faced with a new video game and no instruction booklet. All it gets in terms of inputs are the data contained in each pixel on the screen and score information. The first time it plays, it hits a random key. If that keystroke is rewarded with an increase in score, it learns that its response works given the state of all the data in the game at that moment. This scoring reward reinforces DQN’s decision-making functions to return to more rewarding scenarios. Doing this over and over lets DQN update its neural network so that it learns the rules of the game to get bigger and bigger rewards in the form of higher scores.
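The trial-and-error loop just described is classic reinforcement learning. DQN implements it with a deep network over raw pixels, but the underlying reward-driven update can be illustrated with a tiny tabular sketch on a made-up five-state "game" (everything below is a hypothetical toy, not the paper's setup):

```python
import random
from collections import defaultdict

random.seed(0)  # deterministic toy run

# Tabular Q-learning on a made-up 'game': states 0..4, move left or
# right, reward 1 for reaching state 4. DQN replaces this lookup
# table with a deep network over raw pixels, but the reward-driven
# update below is the same principle the article describes.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
ACTIONS = [0, 1]                      # 0 = left, 1 = right
Q = defaultdict(float)                # Q[(state, action)] -> expected return

def step(state, action):
    nxt = max(0, min(4, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == 4 else 0.0), nxt == 4

for episode in range(200):
    s, done = 0, False
    while not done:
        # Mostly exploit the best-known action, occasionally explore
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Nudge Q toward the observed reward plus discounted future value
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, moving right from the start looks better than left
print(Q[(0, 1)] > Q[(0, 0)])  # True
```

The first plays are random keystrokes; the update line is what "reinforces DQN's decision-making functions," gradually steering the agent back toward rewarding scenarios.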

This is a fundamentally different approach from other famous AIs like IBM’s Deep Blue and Watson. These systems’ abilities are preprogrammed, with computer scientists and chess masters (in the case of Deep Blue) plugging in their knowledge so that the agent selects the best solutions from that dataset based on statistical processing. DQN, on the other hand, starts with no input data besides what it perceives in real time that flows in from the current state of the game. “This is more realistic for real-world applications,” says Koray Kavukcuoglu, a computer scientist and engineer on the team. “This active agent goes all the way from perception to making a decision.”

That decision ends up being a good one, based on what would be a grueling training regimen for a human gamer. DQN receives data a frame at a time from an Atari 2600 emulator that is separate from the AI. It selects a move from all possible actions based on the reward it expects to receive and sends that command back to the emulator. Then the cycle repeats. DQN trains on each game title through 50 million frames, which amounts to 38 days of game experience. The agent requires considerable amounts of computation, though it can run on an ordinary desktop computer; no supercomputer is required.
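Those training-volume figures check out, assuming the Atari 2600's 60 Hz frame rate and the frame-skip of four (the agent acts once per four emulator frames) commonly used in DQN work; neither constant is stated in the text:

```python
FPS = 60                  # Atari 2600 video frame rate (assumed)
FRAME_SKIP = 4            # emulator frames per agent action (assumed)

agent_frames = 50_000_000               # training frames per game, per the text
emulator_frames = agent_frames * FRAME_SKIP

days = emulator_frames / FPS / 3600 / 24
print(f"{days:.1f} days")               # ~38.6 days of game experience
```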

Hassabis says the point of their work isn’t to master video games. These are just a proxy for other applications where large amounts of unstructured data must be quickly processed to solve problems. He says their work could show up in Google products like search, language translation and other functions. “We can apply this to any unstructured data, like asking your phone to plan a trip to Europe, and it goes and books the flights and hotels automatically,” he says. “In the future, it could help do science like disease research and understanding climate science—anywhere there are huge amounts of data that scientists must deal with.”

Bernhard Schölkopf, a researcher at the Max Planck Institute for Intelligent Systems who was not involved in the DQN work, says the Google effort is “a remarkable example of the progress being made in AI.”

DQN is interesting, he says, because the Deepmind team produced a successful system with improved capabilities using methods that have been understood for decades. Their approach created an adaptable agent that performed as well or better than human game testers in the wide range of environments present in the games.

“In the early days of AI, beating a professional chess player was held by some to be the gold standard,” Schölkopf wrote in commentary printed in the same issue of Nature. “This has now been achieved, and the target has shifted as we have grown to understand that other problems are much harder for computers, in particular problems involving high dimensionalities and noisy inputs. These are real-world problems, at which biological perception–action systems excel and machine learning outperforms conventional engineering methods.”

All gifs created from video courtesy of Google DeepMind (with permission from Atari Interactive Inc.)


Leggings in Loud Prints Are a Hit

February 25, 2015


 (and More Flattering Than Most Women Expect)


Strong sales are fueling small brands such as Onzie and Zara Terez with social media followings

Many women are trading in neutral workout clothes for leggings in crazy colors and patterns made by previously unknown brands. WSJ contributor Erin Geiger Smith reports. Photo: Onzie

Feb. 24, 2015

At Bandier, a Manhattan fitness boutique, one of the most popular racks is packed with leggings that range from pretty to pretty funny: They are covered in loud prints of cherry blossoms or green kale, neon-pixel designs or emojis.

The era of simple, black leggings or yoga pants as the universal workout bottoms appears to be over. Instead, yoga classes, boutique fitness studios and running trails are filled with Spandex leggings in look-at-me prints, often from brands that aren’t exactly household names. Devotees swear the patterned leggings are more flattering—and just more fun—than their drab brethren.

“We can barely keep them in stock,” says Donna Burke, managing partner of Atlanta Activewear, in Georgia, which recently sold out of $74 “Malibu” scene-printed leggings from Onzie, a four-year-old Los Angeles company. In the past year or so, sales of printed leggings “went from very minimal to an absolute have-to-have trend,” Ms. Burke says.

Liz Kelley, who works at an Austin, Texas, software company, wears leggings that have one-half of a tiger’s face on each thigh. “They were so ridiculous, I couldn’t not buy them,” Ms. Kelley says. “They make people laugh.”

Not everyone wants to highlight their thighs with an explosion of color. “I hate patterned leggings” and “patterned leggings are ugly” posts abound on social media. “I’m just the type of person that likes black or gray or navy,” says Melissa Scott, a payroll director in Augusta, Ga., who practices yoga and lifts weights. Patterned leggings, she says, “draw too much attention when you’re in the gym. They’re just too loud.”

Vanessa Cornell, center, exercises at a high-intensity, dance-based interval fitness class at Anna Kaiser’s AKT In Motion studio on Manhattan’s Upper East Side. Photo: Steve Remich for The Wall Street Journal

Yet many people who wear them started out as skeptics, brand and store owners say. Rebecca McCrensky, founder of Altar Ego, an Andover, Mass., athletic-wear company, says women might start out buying a “gateway” legging in a black-based print, such as her design with a red-and-white skull on the side.

Many printed leggings “have horrible hanger appeal,” Bandier owner Jennifer Bandier acknowledges. “But I always say to people, ‘Those leggings don’t look like they’re anything, and then you put them on and you’re like, Oh my God, I look like a supermodel,’ ” she says.

The busier the print, the more flattering it is, because the eye follows its movement, proponents say. “Prints hide everything and enhance all the right parts,” says Onzie owner Kimberly Swarth. She says a high-quality fabric keeps a print from looking stretched on larger parts of the leg. Ms. Burke says Onzie’s Malibu design looks great on because the pattern’s darkest part is high on the leg, with lighter flowers and sky lower down.

What to wear with patterned leggings can vary. Some women wear a simple fitted tank top or loose sleeveless T-shirt. Some wear only a matching sports bra. For more coverage, tie a sweatshirt around your waist, Onzie’s Ms. Swarth says.

Sales of women’s leggings advanced 18% in 2014 to $1.1 billion, with sales of “active” leggings growing twice as fast as leggings overall, according to NPD Group. In the crowded “ath-leisure” segment, prints do the trick of offering shoppers more reasons to buy.

When women embrace the trend, they often buy multiple pairs, says Atlanta Activewear’s Ms. Burke; people notice when you wear the same vibrant print several times in a row. “You can only wear them once every two or three weeks because they stand out,” agrees Kelly Hershman, a 37-year-old spin and barre instructor in Dallas who calls leggings a “passion” and says she owns more than 80 pairs.

Liz Holt, 47, says she didn’t own any patterned leggings six months ago, but was converted after seeing others wear them. “I had the mentality before that pattern would make you look bigger, but they really don’t,” she says. She says she feels confident in front of the mirror during a workout. One print—of audio speakers—didn’t work for her, but she found six patterns that did, including a design of lightning over Paris.

Many aficionados discover small leggings brands like Emily Hsu Designs, of New York, on Instagram. The patterns on her leggings include “Bang Bang Graffiti,” a cartoon-like print including nail polish, kittens and the words “Bang” and “Zap” in a rainbow of colors. Ms. Hsu started her company less than a year ago as a one-woman operation, after friends kept asking about leggings she was making for herself and her daughter.

In her first month of business, July 2014, Ms. Hsu started posting on Instagram and sold 50 pairs; by January she was selling 400 a month, at around $50 each. She expects 2015 gross sales to exceed $250,000. Customers come from all over the U.S., England and Thailand.

“It happens to be that right time, where everyone is really embracing the printed legging thing,” Ms. Hsu says. Many people wear them outside the gym with a cute top and ankle boots, she adds.

“You wouldn’t think I could wear donuts on my leg,” Ms. Hsu adds. But she did wear donut-printed leggings to yoga recently. “People loved them,” she says.

Zara Terez Tisch, founder and chief executive of activewear brand Zara Terez, says some of her company’s original prints began as photographs taken with an iPhone. There are patterns of multicolored cassette tapes and lavender hydrangea, and the company recently collaborated with Toast, a rescue spaniel and social media sensation, on a collage of images of the dog, tongue out and wearing various outfits.

Anna Kaiser, owner of Manhattan dance-based fitness studio AKT in Motion, has sold patterned leggings at her studio since she opened in 2013, mostly from smaller companies like Zara Terez and Cândida Mariá. “There was so much space in the fashion fitness realm for them to emerge,” Ms. Kaiser says.


The Intelligent Guide to Social Media Management Platforms

February 25, 2015


February 25, 2015

Retargeting: Why Your Mobile Marketing Strategy is Incomplete Without It



As dramatic as the evolution of mobile marketing technology has been recently, mobile marketing tactics have been evolving as well. When mobile-first companies initially started advertising on smartphones and tablets, their primary goal was to acquire large volumes of loyal users at the lowest costs. Lately, larger brands have entered the mobile space and have focused their strategies largely on awareness instead of purely on acquisition. In either case, those messages are targeted at users who aren’t yet familiar with your app or your brand. But what about the users you’ve already acquired or reached? If your sole focus is on new users, you could be missing out on a world of opportunity within your current user base. According to Gartner, 80% of your company’s future revenue will come from just 20% of your existing customers. You need to focus on maximizing the value you get from those current users, and thanks to the emergence of new mobile technologies, you can.



February 24, 2015
ULTIMATE GUIDE TO ASSESSING YOUR DIGITAL MARKETING PROGRAM Every day, marketers take steps to enhance their digital marketing programs. A new email here, a Twitter campaign there, a list growth initiative going live soon. With buyer sophistication growing daily, you’re constantly having to re-up your marketing game with increasingly smart campaigns. Given limited resources, it’s challenging to find the time to pause long enough to consider your next move, let alone evaluate how your efforts are working. But before you dive headlong into the next task, consider this: How will you know if you’re improving (or regressing) unless you step back to take stock of your digital marketing program? Assessments can be the starting point for establishing – and achieving – goals in the upcoming months and years, but that’s just the first benefit. They’re also valuable for sharing with coworkers to help explain where you are and where you want to go. And last but certainly not least, they raise awareness for digital marketing among the executive staff and help get budget buy-in for new campaigns. Done properly, periodic assessments enable you to attack upcoming initiatives with renewed vigor and make substantial progress compared to what you’d be able to achieve without taking the time to regroup. Getting the Timing Right Performing thorough, periodic assessments is highly recommended – but you have to make sure you pick the right time to carry through with them, otherwise you’ll spread yourself too thin. During budget season, for example, you’re busy figuring out how much key programs will cost, tallying technology expenses, and making a case for additional headcount and resources. For most marketers, this exercise is pretty all-consuming with specific deadlines attached to it. So naturally, assessment exercises will fall by the wayside. With that in mind, look to do your evaluations at a different time of year than your budget planning, heaviest sales period, etc. 
Pick a month that's a little less hectic to begin to take stock of your digital programs. For each area of your program, evaluate what you're doing, compare to benchmarks where possible and develop action plans to improve your execution and overall sophistication. While the key areas of assessment will vary a bit based on your industry and product or service sold, most digital marketers should systematically review at least the following seven areas:

• Target Market
• Pipeline
• Content
• Engagement
• Technology and Skills
• Mobile Marketing
• Social Media

In this white paper, we'll take a closer look at each of these seven areas, advise you on key questions to ask as you're assessing yourself, and provide worksheets and related tools to help you get the most out of your evaluation. Remember: Done thoroughly and thoughtfully across the department, your digital marketing assessment can be both a report card for how well you're doing and a springboard for substantial improvement moving forward. Let's get started.

Tips for Performing a Marketing Self-Assessment


Categories: Uncategorized

The hidden story behind the code that runs our lives.

February 23, 2015 Leave a comment

The Believers

The hidden story behind the code that mimics our brains and runs our lives

Michelle Siu

Geoffrey Hinton splits his time between the U. of Toronto and Google.

Magic has entered our world. In the pockets of many Americans today are thin black slabs that, somehow, understand and anticipate our desires. Linked to the digital cloud and satellites beyond, churning through personal data, these machines listen and assist, decoding our language, viewing and labeling reality with their cameras. This summer, as I walked to an appointment at the University of Toronto, stepping out of my downtown hotel into brisk hints of fall, my phone already had directions at hand. I asked where to find coffee on the way. It told me. What did the machine know? How did it learn? A gap broader than any we’ve known has opened between our use of technology and our understanding of it. How did the machine work? As I would discover, no one could say for certain. But as I walked with my coffee, I was on the way to meet the man most qualified to bridge the gap between what the machine knows and what you know.

Geoffrey Hinton is a torchbearer, an academic computer scientist who has spent his career, along with a small band of fellow travelers, devoted to an idea of artificial intelligence that has been discarded multiple times over. A brilliant but peripheral figure. A believer. A brusque coder who had to hide his ideas in obscure language to get them past peer review. A devotee of the notion that, despite how little we understand the brain, even a toy model of it could present more computational power and flexibility than the rigid logic or programmed knowledge of traditional artificial intelligence. A man whose ideas and algorithms might now help power nearly every aspect of our lives. A guru of the artificial neural network.

Such networks, which have been rebranded "deep learning," have had an unparalleled ascent over the past few years. They've hit the front page of The New York Times. Adept at processing speech, vision, and other aspects of the messy interface with humanity that has been sped up by ubiquitous mobile devices, nets have been embraced by Google, Facebook, Microsoft, Baidu, and nearly any other tech leader you can imagine. At these companies, neural nets have proved an efficient way to soak up vast amounts of data and make highly valuable predictions from it: How do you make a data center more energy efficient? Will this user want to buy a car soon? Tech companies compete fiercely for every coder who shows an aptitude for developing neural nets, often luring them away from careers in academe. Last year, for Google, that included reportedly spending more than $400-million on a company, DeepMind, with no products, only a way of integrating memory into learning algorithms. And before that, Google bought Hinton's services for an undisclosed sum.

“If you want to understand how the mind works, ignoring the brain is probably a bad idea.”

There’s seemingly no crevice of technology that hasn’t felt the creep of deep learning. Over the months, announcements pile up in my inbox: Deep learning that identifies autism-risk genes. Deep learning that writes automated captions for pictures and video. Deep learning to identify particles in the Large Hadron Collider. Deep learning to guide our cars and robots.

With each announcement, deep learning has nudged the notion of artificial intelligence back into the public sphere, though not always to productive ends. Should we worry about the robot revolution to come? Spoiler alert: not right now; maybe in 50 years. Are these programmers foolish enough to think they’re actually mimicking the brain? No. Are we on the way to truly intelligent machines? It depends on how you define intelligence. Can deep learning live up to its hype? Well …

Such a clamor has risen around deep learning that many researchers warn that if they don’t deliver on its potential, they risk a backlash against all of artificial intelligence. “It’s damaging,” says Yann LeCun, a professor at New York University who now directs Facebook’s AI research. “The field of AI died three or four times because of hype.”

Several of these deaths came at the hands of artificial neural networks. In the 1960s and again in the 1980s, neural nets rose like a rocket, only to fall to earth once the limitless dreams of their creators met the limits of transistors. During those dark days, the few devoted researchers, like Hinton and LeCun, were down in the “rat holes,” ignored by the academic world, one longtime Hinton collaborator told me. Few would have expected a third ascent. Many still fear another crash.

Hinton, however, is all confidence. He had invited me to Toronto to learn about this new era’s deep history. For a decade, he’s run a weeklong summer school on neural nets; I stopped by while it was under way. It was a hot day of dry presentations and young men, mostly men, with overflowing hopes packed into overflowing rooms. I found Hinton in his office, which he’s kept despite becoming emeritus. A bad back leaves him standing; when he travels to Google’s headquarters in California for half the year, he goes by train. Decorating his door were handwritten digits, indecipherable, pulled from a data set that provided some of neural networks’ earliest successes.

It’s hard for Hinton, 67, not to feel a bit pleased with himself. After a lifetime on the periphery, he now has a way of connecting with nearly anyone he meets. For example, when in Toronto, he works out of Google’s office downtown, which is filled with advertising employees. He’s the only researcher. Occasionally, a curious employee sidles up and asks, “What do you do?”

“Do you have an Android phone?” Hinton replies.


“The speech recognition is pretty good, isn’t it?”


“Well, I design the neural networks that recognize what you say.”

The questioner nearly always pauses in thought.

“Wait, what do you mean?”

For nearly as long as we’ve attempted to create “thinking” computers, researchers have argued about the way they should run. Should they imitate how we imagine the mind to work, as a Cartesian wonderland of logic and abstract thought that could be coded into a programming language? Or should they instead imitate a drastically simplified version of the actual, physical brain, with its web of neurons and axon tails, in the hopes that these networks will enable higher levels of calculation? It’s a dispute that has shaped artificial intelligence for decades.

One pioneer of brain imitation, in the late 1950s, was Frank Rosenblatt, a psychologist at the Cornell Aeronautical Laboratory. He was inspired by the work of Donald O. Hebb, who a decade earlier had predicted how learning might work: As one neuron fires and activates another, repeatedly, the cells improve their joint efficiency. “The cells that fire together, wire together,” as cognitive scientists like to say. This simple idea, Rosenblatt thought, was enough to build a machine that could learn to identify objects.

Build it he did: You can see parts of the Perceptron, as he called it, in the Smithsonian. Its operation was simple. Taking up an entire lab room, it worked in three layers. At one end, a square cluster of 400 light sensors simulated a retina; the sensors connected multiple times to an array of 512 electrical triggers, each of which fired, like a neuron, when it passed a certain adjustable threshold of excitement. These triggers then connected to the last layer, which would signal if an object matched what the Perceptron had been trained to see.

Trained is the operative word: The Perceptron was not programmed, but trained. It could not learn on its own. Rosenblatt created a formula that calculated how much the Perceptron was wrong or right, and that error could then be traced back and individually changed in those 512 triggers. Tweak these weights enough, and the Perceptron could begin to recognize very basic patterns, such as standardized letter shapes.
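The error-correction rule described above can be sketched in a few lines of modern code. This is a hypothetical illustration in Python, not the Perceptron's actual electromechanical implementation: compute an output, compare it with the target, and trace the error back to each weight.

```python
# A sketch of Rosenblatt-style error correction: each wrong answer
# nudges the weights toward the right one.
def train_perceptron(samples, n_inputs, lr=0.1, epochs=20):
    weights = [0.0] * n_inputs
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:  # target is 0 or 1
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = target - output
            # trace the error back and individually change each weight
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# A linearly separable pattern (logical AND), learnable by a single layer
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data, 2)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print([predict(x) for x, _ in data])  # converges to [0, 0, 0, 1]
```

A single layer like this can only separate patterns with a straight line, which is exactly the limitation Minsky and Papert would later seize on.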

It was a thrilling development, and Rosenblatt wasn’t afraid to share it. In the summer of 1958, he held a news conference with his sponsor, the U.S. Navy. As so often happens in science, he began to talk about the future. To researchers then, he sounded foolish; heard today, prescient. The New York Times caught the gist of it:

The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence. … Later Perceptrons will be able to recognize people and call out their names and instantly translate speech in one language to speech and writing in another language, it was predicted.

Rosenblatt’s fame irked his peers, many of whom had opted to pursue rules-based artificial intelligence; both sides were chasing after the same military research dollars. Most prominently, Marvin Minsky and Seymour Papert, two eminent computer scientists at the Massachusetts Institute of Technology, sought to replicate the Perceptron and expose its flaws, culminating in a 1969 book seen as causing a near-death experience for neural nets. The Perceptron had intrinsic limitations, they said. Most basically, it could not learn “exclusive or,” a basic bit of logic that holds true only if one input is true and the other false.

Learning that function would require an additional layer in the Perceptron. But no one could figure out a biologically plausible way to calculate and transmit the adjustments for such a “hidden” layer. The neural net compressed information, in effect, which could not then be retrieved. It felt like time—it ran only forward. The learning had stopped, and the grant money vanished. Minsky and Papert had won.

Frustrated, Rosenblatt found other outlets. He became fascinated by a project attempting to show that brain cells transplanted from one rat to another would retain memory. The work didn’t last long—he died young, in a 1971 sailing accident, alone, on his birthday. It seemed the neural network would die with him.

No one told Geoff Hinton, who after bouncing around in college among chemistry, physics, physiology, philosophy, and psychology, managed to enroll, in 1972, in a graduate program in artificial intelligence at the University of Edinburgh.

Hinton is the son of a peripatetic British clan whose members tended to do what they thought best. One great-great-grandfather was George Boole, whose algebra became the basis of the computer age, including that exclusive-or that defied Rosenblatt; another owned a Victorian sex club. His grandfather ran a mine in Mexico, and his father was an entomologist: “He thought things with six legs were much more interesting than things with two legs.”

As a teenager, Hinton became fascinated with computers and brains. He could build electrical relays out of razor blades, six-inch nails, and copper wire in 10 minutes; give him an hour, and he’d give you an oscillator.

His view then was the same he has today: “If you want to understand how the mind works, ignoring the brain is probably a bad idea.” Using a computer to build simple models to see if they worked—that seemed the obvious method. “And that’s what I’ve been doing ever since.”

This was not an obvious view. He was the only person pursuing neural nets in his department at Edinburgh. It was hard going. “You seem to be intelligent,” people told him. “Why are you doing this stuff?”

Hinton had to work in secret. His thesis couldn’t focus on learning in neural nets; it had to be on whether a computer could infer parts, like a human leg, in a picture. His first paper on neural nets wouldn’t pass peer review if it mentioned “neural nets”; it had to talk about “optimal networks.” After he graduated, he couldn’t find full-time academic work. But slowly, starting with a 1979 conference he organized, he found his people.

“We both had this belief,” says Terrence J. Sejnowski, a computational neurobiologist at the Salk Institute for Biological Studies and longtime Hinton collaborator. “It was a blind belief. We couldn’t prove anything, mathematical or otherwise.” But as they saw rules-based AI struggle with things like vision, they knew they had an ace up their sleeve, Sejnowski adds. “The only working system that could solve these problems was the brain.”

Hinton has always bucked authority, so it might not be surprising that, in the early 1980s, he found a home as a postdoc in California, under the guidance of two psychologists, David E. Rumelhart and James L. McClelland, at the University of California at San Diego. “In California,” Hinton says, “they had the view that there could be more than one idea that was interesting.” Hinton, in turn, gave them a uniquely computational mind. “We thought Geoff was remarkably insightful,” McClelland says. “He would say things that would open vast new worlds.”

They held weekly meetings in a snug conference room, coffee percolating at the back, to find a way of training their error correction back through multiple layers. Francis Crick, who co-discovered DNA’s structure, heard about their work and insisted on attending, his tall frame dominating the room even as he sat on a low-slung couch. “I thought of him like the fish in The Cat in the Hat,” McClelland says, lecturing them about whether their ideas were biologically plausible.

The group was too hung up on biology, Hinton said. So what if neurons couldn’t send signals backward? They couldn’t slavishly recreate the brain. This was a math problem, he said, what’s known as getting the gradient of a loss function. They realized that their neurons couldn’t be on-off switches. If you picture the calculus of the network like a desert landscape, their neurons were like drops off a sheer cliff; traffic went only one way. If they treated them like a more gentle mesa—a sigmoidal function—then the neurons would still mostly act as a threshold, but information could climb back up.
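The cliff-versus-mesa picture maps onto two activation functions. A small illustrative sketch (the function names are my own) comparing the on-off step with the sigmoid and its gradient:

```python
import math

def step(x):
    # a sheer cliff: flat on both sides, so no error signal can flow back
    return 1.0 if x > 0 else 0.0

def sigmoid(x):
    # a gentle mesa: still mostly a threshold, but smooth everywhere
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    # the sigmoid's derivative is nonzero at every point,
    # so information can climb back up the slope
    s = sigmoid(x)
    return s * (1.0 - s)

for x in (-4.0, 0.0, 4.0):
    print(f"x={x:+.0f}  step={step(x):.0f}  "
          f"sigmoid={sigmoid(x):.3f}  grad={sigmoid_grad(x):.3f}")
```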

While this went on, Hinton had to leave San Diego. The computer-science department had decided not to offer him a position. He went back to Britain for a lackluster job. Then one night he was startled awake by a phone call from a man named Charlie Smith.

“You don’t know me, but I know you,” Smith told him. “I work for the System Development Corporation. We want to fund long-range speculative research. We’re particularly interested in research that either won’t work or, if it does work, won’t work for a long time. And I’ve been reading some of your papers.”

Hinton won $350,000 from this mysterious group. He later learned its origins: It was a subsidiary of the nonprofit RAND Corporation that had ended up making millions in profit by writing software for nuclear missile strikes. The government caught them, and said they could either pay up or give the money away—fast. The grant made Hinton a much more palatable hire in academe.

Back in San Diego, Rumelhart kept on the math of their algorithm, which they started calling back-propagation. When it was done, he simulated the same exclusive-or that had defied Rosenblatt. He let it run overnight. When he returned the next morning, the neural net had learned.
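What Rumelhart's overnight run accomplished can be sketched as follows, assuming a small sigmoid network rather than his actual program: back-propagation adjusts two layers of weights until the net learns the exclusive-or that a single layer never could.

```python
import math
import random

random.seed(1)
INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
TARGETS = [0, 1, 1, 0]  # exclusive-or
H = 4  # hidden units: the "hidden" layer no one had known how to train

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# small random weights: w1[j] holds two input weights plus a bias for hidden unit j
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]  # hidden-to-output weights plus bias

def forward(x):
    hidden = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    out = sigmoid(sum(w * h for w, h in zip(w2, hidden)) + w2[H])
    return hidden, out

LR = 0.5
for _ in range(20000):
    for x, t in zip(INPUTS, TARGETS):
        hidden, out = forward(x)
        # error at the output, scaled by the sigmoid's slope
        d_out = (out - t) * out * (1 - out)
        # propagate the error back through the hidden layer
        d_hidden = [d_out * w2[j] * hidden[j] * (1 - hidden[j]) for j in range(H)]
        for j in range(H):
            w2[j] -= LR * d_out * hidden[j]
            w1[j][0] -= LR * d_hidden[j] * x[0]
            w1[j][1] -= LR * d_hidden[j] * x[1]
            w1[j][2] -= LR * d_hidden[j]
        w2[H] -= LR * d_out

predictions = [round(forward(x)[1]) for x in INPUTS]
print(predictions)
```

Run long enough, the same procedure extends to many layers, which is the core of today's deep learning.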

By the late 1980s, neural nets were everywhere. They were back in The New York Times, which reviewed a technical book written by the San Diego team. Companies promised to solve a fleet of problems with neural nets. Even Hollywood took notice: “My CPU is a neural net processor,” Arnold Schwarzenegger’s robotic terminator said. “A learning computer.”

Mark Abramson for The Chronicle

Yann LeCun, a professor at NYU, also leads Facebook’s artificial-intelligence laboratory.

Hinton spent a few years at Carnegie Mellon University. With Rumelhart and Ronald J. Williams, he had shown that neural nets could learn multiple layers of features, essential to proving that complex calculations could arise from such networks. But he was dissatisfied with back-propagation, which, it turned out, several others, including LeCun, had also invented—it just didn’t seem powerful enough. With Sejnowski, he developed a neural net modeled after the Boltzmann distribution, a bit of statistical physics that describes the probabilities for how matter shifts energy states under changing temperatures. (Think water turning to ice.) It was classic Hinton: He builds code from physical analogies, not pure math. It was a fertile time. Sejnowski remembers sitting in his kitchen, getting a call from Hinton: “Terry, I’ve figured out how the brain works,” Hinton said. Over the last 30 years, Sejnowski adds, Hinton has called and told him that a dozen times.

The world didn’t join Hinton in that excitement for long. The research hit new walls. Neural nets could learn, but not well. They slurped up computing power and needed a bevy of examples to learn. If a neural net failed, the reasons were opaque—like our own brain. If two people applied the same algorithm, they’d get different results. Engineers hated this fickleness, says Facebook’s LeCun. This is too complicated, they said, therefore the people who use it must believe in magic. Instead, coders opted for learning algorithms that behaved predictably and seemed to do as well as back-propagation.

As they watched neural nets fade, they also had to watch Rumelhart, the man most responsible for their second wave, decline. He was slowly succumbing to Pick’s disease, a rare dementia that, McClelland suggests, may arise from overusing the neurons in the brain. (He died in 2011.) The Cognitive Science Society began offering an award in Rumelhart’s honor in 2001; Hinton was its first recipient.

The field lost its vision, says Yoshua Bengio, a professor at the University of Montreal who, in the 1990s, joined Hinton and LeCun as a neural-net partisan. Though a neural net LeCun had modeled after the visual cortex was reading up to 20 percent of all U.S. bank checks, no one talked about artificial intelligence anymore. “It was difficult to publish anything that had to do with neural nets at the major machine-learning conferences,” Bengio told me. “In about 10 years, neural nets went from the thing to oblivion.”

A decade ago, Hinton, LeCun, and Bengio conspired to bring them back. Neural nets had a particular advantage compared with their peers: While they could be trained to recognize new objects—supervised learning, as it’s called—they should also be able to identify patterns on their own, much like a child, if left alone, would figure out the difference between a sphere and a cube before its parent says, “This is a cube.” If they could get unsupervised learning to work, the researchers thought, everyone would come back. By 2006, Hinton had a paper out on “deep belief networks,” which could run many layers deep and learn rudimentary features on their own, improved by training only near the end. They started calling these artificial neural networks by a new name: “deep learning.” The rebrand was on.

Before they won over the world, however, the world came back to them. That same year, a different type of computer chip, the graphics processing unit, became more powerful, and Hinton’s students found it to be perfect for the punishing demands of deep learning. Neural nets got 30 times faster overnight. Google and Facebook began to pile up hoards of data about their users, and it became easier to run programs across a huge web of computers. One of Hinton’s students interned at Google and imported Hinton’s speech recognition into its system. It was an instant success, outperforming voice-recognition algorithms that had been tweaked for decades. Google began moving all its Android phones over to Hinton’s software.

It was a stunning result. These neural nets were little different from what existed in the 1980s. This was simple supervised learning. It didn’t even require Hinton’s 2006 breakthrough. It just turned out that no other algorithm scaled up like these nets. “Retrospectively, it was just a question of the amount of data and the amount of computations,” Hinton says.

Hinton now spends half his year at Google’s campus, preventing its engineers from traveling down dead ends from decades past. He’s also exploring neural nets that might have been discarded as unworkable, and pursuing what he calls “dark knowledge.” He often spends the full day coding, something he would never have been able to do as a professor. When I asked about the most productive part of his career, he replied without hesitation: “The next five years.”

Google uses deep learning in dozens of products. When I visited Hinton this summer, it had just begun applying deep learning to language translation. Google has encoder and decoder networks for each language, which convert each word into a big matrix of numbers that capture much of its meaning—the numbers for “cat” and “dog,” say, will be much more similar than those for “dog” and “auburn.” The English encoder passes those numbers to the French decoder, for example, which makes an overall prediction with those numbers, and then compares that prediction with word-by-word analysis as it goes, all the while comparing the results with known translations and back-propagating the errors. After a few months, it was already working well, Hinton said.
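The similarity claim about "cat," "dog," and "auburn" can be made concrete with cosine similarity over word vectors. These three-dimensional vectors are hand-made for illustration; real systems learn hundreds of dimensions from data.

```python
import math

# hypothetical toy embeddings; values chosen only to illustrate the point
vectors = {
    "cat":    [0.9, 0.8, 0.1],
    "dog":    [0.8, 0.9, 0.2],
    "auburn": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    # cosine similarity: near 1.0 for aligned vectors, near 0 for unrelated ones
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(round(cosine(vectors["cat"], vectors["dog"]), 3))     # high: similar meanings
print(round(cosine(vectors["dog"], vectors["auburn"]), 3))  # low: unrelated words
```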

There’s some irony that Hinton, such an iconoclast, is now working for a large company. But it’s unavoidable. These companies have the tools to make deep learning work, and universities do not. During a coffee break at Hinton’s summer school, I overheard a young academic griping about not getting training data from one company. After a few minutes, he added, “I’m going to Microsoft, so this won’t be a problem for me soon.”

“There is the slight danger that if enough big companies hire enough of the researchers, there won’t be enough left in the universities to produce new students and to do the pure basic research,” Hinton says. But the tech companies are aware of this problem, he adds. Google, for example, is eager for Bengio to keep on his basic research, Hinton says.

“We could have moved a lot faster, if it weren’t for the ways of science as a human enterprise.”

At Facebook, LeCun has been recreating a new version of Bell Labs, where he worked in the 1990s. They will publish their work, he promised, if at somewhat of a time delay. “I don’t think academic research is going to be put out of existence,” he adds. The tech rush for talent is creating more students than they’re losing to industry. While he’s wary of hype, he’s also confident that deep learning is just getting started. “I wouldn’t have been doing this for 20 years, against the better judgment of everybody, unless I believed in these methods.”

Bengio, for one, can’t help but think back to all the grants not financed, the peer-review attacks from scientists invested in older approaches to computer vision or speech recognition. “We could have moved a lot faster, if it weren’t for the ways of science as a human enterprise,” he says. Diversity should trump personal biases, he says, “but humans tend to discard things they don’t understand or believe in.”

Now the neural-net researchers are dominant. Even faculty members at MIT, long a bastion of traditional artificial intelligence, are on board.

“We were little furry mammals scrambling under the feet of dinosaurs,” Salk’s Sejnowski says. “Basically, the little afraid mammals won. The dinosaurs are gone. It’s a new era.”

Many of the dreams Rosenblatt shared in his news conference have come true. Others, like computer consciousness, remain distant. The largest neural nets today have about a billion connections, 1,000 times the size of a few years ago. But that’s tiny compared with the brain. A billion connections is a cubic millimeter of tissue; in a brain scan, it’d be less than a voxel. We’re far from human intelligence. Hinton remains intrigued and inspired by the brain, but he knows he’s not recreating it. It’s not even close.

Speculation remains on what neural nets will achieve as they grow larger. Many researchers resist the notion that reasoning could ever evolve out of them. Gary F. Marcus, an NYU psychologist, critiqued the gains of deep learning in several New Yorker essays, to the point that Hinton pushed him to state what a neural net would have to do to impress him. His answer? Read this: “The city councilmen refused to give the demonstrators a license because they feared violence.” Who feared the violence? If a neural net could answer that question, then they’d be on to something.

There’s a deep irony in all this, Sejnowski adds. Deep learning is now one of the most promising tools for exploiting the enormous databases stemming from neuroscience. “We started this thing to understand how the brain works,” he says. “And it turns out the very tools we created, many not very brainlike, are the optimal tools to understand what neuroscience is doing.”

It was a long day in Toronto. At one point during my visit, I noticed that Hinton had a program running on his laptop. Every few seconds, two black-and-white handwritten numbers flashed on screen, randomly overlaid. He was testing a new algorithm, seeing how well it did in detecting the two numbers despite the visual clutter.

Two new numbers appeared. His eyes turned mischievous.

“So what are those two digits?” he asked me.

“Six and a four?”

I was right. The computer was, too. But I was getting tired. My neural nets were misfiring. Another set of numbers flashed.

“How about those?” Hinton said.

“That’s tough. Zero and five?” I said.

“Zero and nine. It got zero and nine. It’s better than you.”

I was wrong. The machine was not.

Paul Voosen is a senior reporter for The Chronicle.

Categories: Uncategorized

Tap, tap. Who’s there? Google Wallet and Softcard!

February 23, 2015 Leave a comment


Posted: 2/23/15
GOOGLE first introduced Google Wallet’s tap and pay feature in 2011, and since then, mobile payments have grown rapidly. You can use the Google Wallet app on Android devices, on any carrier network, to tap and pay anywhere NFC is accepted. Over the years, we’ve received great feedback from people who use this feature and we’ve continued investing to make it easy and secure for more people to pay with their phones. A big part of this is working with other innovators in the industry to help provide a seamless experience across a wide range of phones and stores.

So today, we’re excited to announce that we’re working with AT&T Mobility, T-Mobile USA and Verizon Wireless, as well as their mobile payments company Softcard, to help more Android users get the benefits of tap and pay. Under this relationship, the Google Wallet app, including the tap and pay functionality, will come pre-installed on Android phones (running KitKat or higher) sold by these carriers in the US later this year. We’re also acquiring some exciting technology and intellectual property from Softcard to make Google Wallet better.

From tap and pay to storing loyalty and gift cards to sending money to friends, we’ve been working hard to make the Google Wallet app even more useful to you — and there’s lots more to come.

Posted by Ariel Bardin, Vice President of Payments

Categories: Uncategorized

Cross-Device Targeting and Measurement Will Impact Digital Display Advertisers in 2015

February 22, 2015 Leave a comment


February 22, 2015

Fast growth in video viewing on desktop and laptop PCs, and more recently on mobile devices, has shifted a significant share of time spent with video to digital venues. To help marketers looking to target and track video viewers across channels, eMarketer has curated a Roundup of our latest coverage on the subject, including statistics, insights and interviews.

Categories: Uncategorized

February 22, 2015 Leave a comment


In this SlideShare, you’ll find 20 predictions from business people with experience in various marketing disciplines – from search advertising and content marketing to product marketing and branding. As you flip through the pages, you’ll find a common theme: today’s customer journey is complex.

We hope that these predictions help you get in the #mobilemindset for 2015.



Categories: Uncategorized

In ILWU-PMA deal, damage is ignored

February 22, 2015 Leave a comment

The short announcement last night that a West Coast labor agreement had been struck spoke loud and clear to why the current system of longshore labor relations is rotten to the core. After subjecting thousands of companies to months of costly delays and disruption, you would think the International Longshore and Warehouse Union and the Pacific Maritime Association would acknowledge the pain they caused and perhaps say something to the effect that efforts will be made to restore trust in West Coast ports. After all, it’s the shippers’ cargo that ultimately pays the bills, and there is no shortage of alternative routes for discretionary cargo.

Nothing of the kind was said, other than a line that the agreement will be “good for workers and for the industry.” To the extent it was not coincidental that “workers” appeared in the sentence before “industry” – and I’m sure it wasn’t – it lays bare the insular priorities of the negotiating parties, versus the larger economy that depends on well-functioning ports. It was left to port authorities like Oakland to acknowledge the customer and its concerns: “Shippers are looking to us to accelerate the flow of cargo,” Oakland executive director Chris Lytle said in a press release after the settlement was announced. “We owe them our best effort.”

The failure by the negotiating parties to even acknowledge the damage to sales, employment, profits and future business opportunity speaks to how a national economic engine like the West Coast port system is under the control of parties whose first loyalty is to themselves versus the economy. The lack of an iota of acknowledgement or responsibility also underscores the descent of the PMA-ILWU relationship into bitterness and hostility over the past several months, such that any joint wording about working together for a better collective future was impossible by the time the agreement was grudgingly signed on Friday evening. That was a far cry from the halcyon early days of the negotiations last June and July, when the ILWU and PMA on several occasions jointly announced that “both parties have pledged to keep cargo moving.” How distant and empty those words seem now.

Nor was anyone else coming to the aid of shippers. As the economic pain emanated outward from the West Coast, Washington appeared oblivious or unconcerned, or both, with the White House stepping in only after the news started making national headlines. That does not bode well for the idea that the process will be meaningfully reformed, such that disruption will be dismantled as a tool of leverage, during the five-year duration of the new agreement.

What we will now see is a significant reaction from importer and exporter companies. Not all cargo can avoid the West Coast — not by a long shot — but unlike 2002 when many C-Suites were blindsided by the 10-day lockout, this time there is complete understanding of the risks and a full realization that, though it may be five years in the future, they will be going through this all over again unless long-term changes in their supply chains are made starting now.

As my colleague Bill Mongelluzzo writes in an analysis today, “importers and exporters, disgusted by months of fruitless contract negotiations, port congestion and public bickering between the ILWU and PMA, will say enough is enough. Retailers and direct shippers in surveys have indicated they will most likely shift some of their cargo volume to East Coast ports.”

Oakland put it well in a Q&A issued after the settlement:

Q: Will it be more of the same at the next negotiation?

A: There’s a history of challenging bargaining over waterfront contracts. The hope is that both sides will recognize the need to settle future contracts without further damaging the economy.

Hope — that’s about all the reassurance shippers can have at this point.

Contact Peter Tirschwell and follow him on Twitter: @petertirschwell.


The Future of Retail is the End of Wholesale

February 20, 2015


E-commerce will rapidly reshape the entire economic model of retail, spelling the end of wholesale, argues Doug Stephens, founder of Retail Prophet.

Burberry Shanghai store | Source: Burberry

TORONTO, Canada — Retail is facing a monumental problem that no one seems to want to talk about. It’s that the entire economic model of revenue and profitability for retailers and the suppliers they do business with is collapsing under its own weight and soon will no longer function.

Part of the problem stems from the continued pervasiveness of online retail. Global e-commerce increased by 19 percent in 2013 alone, a figure that was likely equalled or bettered in 2014. At those growth rates, it's entirely likely that 30 percent or more of the total retail economy will be transacted online by 2025.

Our dependence on stores to serve as distribution points for products is rapidly diminishing as digital media, in all forms, becomes remarkably effective at serving our basic shopping and distribution needs which, until recently, could only be fulfilled by physical stores. Now, just about anything we buy can be on our doorstep in a matter of days, if not hours, via a myriad of online shopping options.

The physical store has the potential to be the most powerful and effective form of media available to a brand.

The End of Wholesale

This historic transition raises a few critical questions: How can the financial models for retail revenue and profit, which haven't changed significantly since the industrial revolution, be sustained if the core purpose and definition of a "retail store" itself is being completely reinvented? How can retailers continue to buy products in mass quantity at wholesale, ship them, inventory them, merchandise them, train their staff on them, manage them and attempt to sell them, when the consumer has a growing myriad of options, channels and brands through which to buy those very same products? How many of today's retailers will simply stand by and watch an ever-increasing percentage of their sales cleave off to an expanding mosaic of online competitors — which, by the way, may include many of their own suppliers, who are now selling direct to consumers?

It seems inevitable that retailers will have to define a new model; one better suited to the fragmented market they find themselves in.

The Store As Media

This does not, however, mean an end to physical retail stores but rather a repurposing. Given their innately live, sensorial and experiential quality, physical stores have the potential to become powerful media points from which retailers can articulate their brand story, excite consumers about products and then funnel their purchases to any number of channels, devices and distributors. In fact, as I've often argued, the physical store has the potential to be the most powerful and effective form of media available to a brand because it offers an experience which, if crafted properly, cannot be replicated online.

The Experience Is The Product

With all this in mind, I foresee a not-so-distant future where the retailer/vendor relationship will begin to look a lot more like a media buy than the wholesale product purchase agreement of today.

Part media outlet, part sales agent — a new breed of experiential retailers will use their physical stores to perfect the consumer experience across categories of products. They will define the ideal experiential journey, employing expert "product ambassadors" and technology to deliver something truly unique, remarkable and memorable. So memorable, in fact, that it leaves a lasting experiential imprint on the shopper. The solitary aim of these new-era retailers will be to drive significant sales for their vendors' products across multiple channels including, at least to some extent, their own. But unlike stores of today that are single-mindedly focused on keeping sales in-house, stores of the future will position themselves as true omnichannel hubs, serving customers through multiple channels of fulfilment, which will ultimately include their vendors and competitors — yes, even their competitors. Attribution for these sales will matter less than delivering the powerful shopping experience responsible for generating them, regardless of how, when or through whom they occur.

Skids of products and rows of shelving will give way to more gallery-esque store designs and artful merchandising, allowing space for in-store media and interactivity with product. Social media will be infused into the experience offering at-the-shelf reviews, ratings and comparisons of products. The store in essence will become an immersive and experiential advertisement for the products it represents and a direct portal to the entire universe of distribution channels available.

A New Revenue Model 

As for revenue: retailers who can design and execute these sorts of outstanding customer experiences will likely charge an upfront fee or "card rate" to their product vendors, based on the volume of positive exposure they bring to the products they represent in store.

If this seems implausible, consider that just as musicians now make significantly less from record sales than they do from live performances, so too will great retailers build their economic model more around delivering a live in-store experience than around the margin from individual product sales.

Metrics Beyond Sales 

This new model will, however, require retailers to qualify and quantify the experience they deliver, the traffic they generate and the consequent downstream sales impact they influence. To that end, an array of new technologies will enable a 360-degree understanding of the experience in both stores and the centres in which they sit. Anonymous facial recognition, video analytics, mobile ID tracking, beacon technology, radio frequency identification and other systems will transform stores into living websites. Using these and other technologies, store chains will be able to understand the profile and behaviour of the customers in their spaces and gather new insight into the level of engagement being created and, eventually, even its causal impact on downstream purchases. In other words, retailers will be able to understand what kind of customers came into the store, how many were repeat versus unique visitors, where they went within the store, what and with whom they engaged, and ultimately what they bought while in the store and even after leaving it.

New Era. No Rules.

My hope is that retailers accept this historic shift as a call to action — a heads-up that, to invoke Sam Walton, the days of stacking it high and watching it fly are gone forever. Retailers that succeed in the digital age will be those that begin now to redefine the value they bring to the equation and dare to defy what is fast becoming old industry math.

Doug Stephens is a retail industry futurist and the founder of Retail Prophet.

The views expressed in Op-Ed pieces are those of the author and do not necessarily reflect the views of The Business of Fashion.
