Big data, 3D printing, activity streams, Internet TV, Near Field Communication (NFC) payment, cloud computing and media tablets are some of the fastest-moving technologies identified in Gartner Inc.’s 2012 Hype Cycle for Emerging Technologies.
Gartner analysts said that these technologies have moved noticeably along the Hype Cycle since 2011, while consumerization is now expected to reach the Plateau of Productivity in two to five years, down from five to 10 years in 2011. Bring your own device (BYOD), 3D printing and social analytics are some of the technologies identified at the Peak of Inflated Expectations in this year’s Emerging Technologies Hype Cycle (see Figure 1).
Gartner’s 2012 Hype Cycle Special Report provides strategists and planners with an assessment of the maturity, business benefit and future direction of more than 1,900 technologies, grouped into 92 areas. New Hype Cycles this year include big data, the Internet of Things, in-memory computing and strategic business capabilities.
The Hype Cycle graphic has been used by Gartner since 1995 to highlight the common pattern of overenthusiasm, disillusionment and eventual realism that accompanies each new technology and innovation. The Hype Cycle Special Report is updated annually to track technologies along this cycle and provide guidance on when and where organizations should adopt them for maximum impact and value.
The Hype Cycle for Emerging Technologies report is the longest-running annual Hype Cycle, providing a cross-industry perspective on the technologies and trends that senior executives, CIOs, strategists, innovators, business developers and technology planners should consider in developing emerging-technology portfolios.
“Gartner’s Hype Cycle for Emerging Technologies targets strategic planning, innovation and emerging technology professionals by highlighting a set of technologies that will have broad-ranging impact across the business,” said Jackie Fenn, vice president and Gartner fellow. “It is the broadest aggregate Gartner Hype Cycle, featuring technologies that are the focus of attention because of particularly high levels of hype, or those that Gartner believes have the potential for significant impact.”
“The theme of this year’s Hype Cycle is the concept of ‘tipping points.’ We are at an interesting moment, a time when many of the scenarios we’ve been talking about for a long time are almost becoming reality,” said Hung LeHong, research vice president at Gartner. “The smarter smartphone is a case in point. It’s now possible to look at a smartphone and unlock it via facial recognition, and then talk to it to ask it to find the nearest bank ATM. However, at the same time, we see that the technology is not quite there yet. We might have to remove our glasses for the facial recognition to work, our smartphones don’t always understand us when we speak, and the location-sensing technology sometimes has trouble finding us.”
Source: Gartner (August 2012)
Although the Hype Cycle presents technologies individually, Gartner encourages enterprises to consider the technologies in sets or groupings, because so many new capabilities and trends involve multiple technologies working together. Often, one or two technologies that are not quite ready can limit the true potential of what is possible. Gartner refers to these technologies as “tipping point technologies” because, once they mature, the scenario can come together from a technology perspective.
Some of the more significant scenarios, and the tipping point technologies that need to mature so that enterprises and governments can deliver new value and experiences to customers and citizens, include:
Any Channel, Any Device, Anywhere — Bring Your Own Everything
The technology industry has long talked about scenarios in which any service or function is available on any device, at any time and anywhere. This scenario is being fueled by the consumerization trend that is making it acceptable for enterprise employees to bring their own personal devices into the work environment. The technologies and trends featured on this Hype Cycle that are part of this scenario include BYOD, hosted virtual desktops, HTML5, the various forms of cloud computing, silicon anode batteries and media tablets. Although all these technologies and trends need to mature for the scenario to become the norm, HTML5, hosted virtual desktops and silicon anode batteries are particularly strong tipping point candidates.
Smarter Things
A world in which things are smart and connected to the Internet has been in the works for more than a decade. Once connected and made smart, things will help people in every facet of their consumer, citizen and employee lives. There are many enabling technologies and trends required to make this scenario a reality. On the 2012 Hype Cycle, Gartner has included autonomous vehicles, mobile robots, Internet of Things, big data, wireless power, complex-event processing, Internet TV, activity streams, machine-to-machine communication services, mesh networks: sensor, home health monitoring and consumer telematics. The technologies and trends that are the tipping points to success include machine-to-machine communication services, mesh networks: sensor, big data, complex-event processing and activity streams.
Big Data and Global Scale Computing at Small Prices
This broad scenario portrays a world in which analytic insight and computing power are nearly infinite and cost-effectively scalable. Once enterprises gain access to these resources, many improved capabilities are possible, such as better understanding customers or better fraud reduction. The enabling technologies and trends on the 2012 Hype Cycle include quantum computing, the various forms of cloud computing, big data, complex-event processing, social analytics, in-memory database management systems, in-memory analytics, text analytics and predictive analytics. The tipping point technologies that will make this scenario accessible to enterprises, governments and consumers include cloud computing, big data and in-memory database management systems.
The Human Way to Interact With Technology
This scenario describes a world in which people interact a lot more naturally with technology. The technologies on the Hype Cycle that make this possible include human augmentation, volumetric and holographic displays, automatic content recognition, natural-language question answering, speech-to-speech translation, big data, gamification, augmented reality, cloud computing, NFC, gesture control, virtual worlds, biometric authentication methods and speech recognition. Many of these technologies have been “emerging” for multiple years and are starting to become commonplace; however, a few stand out as tipping point technologies, including natural-language question answering and NFC.
What Payment Could Really Become
This scenario envisions a cashless world in which every transaction is an electronic one. This will provide enterprises with efficiency and traceability, and consumers with convenience and security. The technologies on the 2012 Hype Cycle that will enable parts of this scenario include NFC payment, mobile over the air (OTA) payment and biometric authentication methods. Related technologies will also impact the payment landscape, albeit more indirectly. These include the Internet of Things, mobile application stores and automatic content recognition. The tipping point will be surpassed when NFC payment and mobile OTA payment technologies mature.
The Voice of the Customer Is on File
Humans are social by nature, which drives a need to share — often publicly. This creates a future in which the “voice of customers” is stored somewhere in the cloud and can be accessed and analyzed to provide better insight into them. The 2012 Hype Cycle features the following enabling technologies and trends: automatic content recognition, crowdsourcing, big data, social analytics, activity streams, cloud computing, audio mining/speech analytics and text analytics. Gartner believes that the tipping point technologies are privacy backlash and big data.
3D Print It at Home
In this scenario, 3D printing allows consumers to print physical objects, such as toys or housewares, at home, just as they print digital photos today. Combined with 3D scanning, it may be possible to scan certain objects with a smartphone and print a near-duplicate. Analysts predict that 3D printing will take more than five years to mature beyond the niche market.
Additional information is available in “Gartner’s Hype Cycle for Emerging Technologies, 2012” at http://www.gartner.com/hypecycles. The Special Report includes a video in which Ms. Fenn provides more details regarding this year’s Hype Cycles, as well as links to the 92 Hype Cycle reports.
Nanocomputer is the logical name for a computer smaller than the microcomputer, which is smaller than the minicomputer. (The minicomputer is called “mini” because it was a lot smaller than the original (mainframe) computers.) More technically, a nanocomputer is one whose fundamental parts are no bigger than a few nanometers. For comparison, the smallest features of current state-of-the-art microprocessors measure 28 nm as of March 24, 2012. No commercially available computer is marketed as a nanocomputer to date, but the term is used in science and science fiction.
There are several ways nanocomputers might be built, using mechanical, electronic, biochemical, or quantum technology. It was argued in 2007 that it is unlikely that nanocomputers will be made out of semiconductor transistors (the microelectronic components at the core of all modern electronic devices), as they seem to perform significantly less well when shrunk to sizes under 100 nanometers; however, as of 2011 it was projected that 22 nm lithography devices would ship before 2012.
Nanocomputing is based on nanotechnology and is, in fact, one of its major applications. The nanoscale components of such a machine are what make it efficient.
The history of computer technology has involved a sequence of changes from gears to relays to valves to transistors to integrated circuits and so on. Today’s techniques can fit logic gates and wires a fraction of a micron wide onto a silicon chip. Soon the parts will become smaller and smaller until they are made up of only a handful of atoms. At this point the laws of classical physics break down and the rules of quantum mechanics take over, so the new quantum technology must replace and/or supplement what we presently have. It will support an entirely new kind of computation with new algorithms based on quantum principles.
Presently our digital computers rely on bits, which, when charged, represent on, true, or 1. When not charged they become off, false, or 0. A register of 3 bits can represent, at a given moment in time, one of eight numbers (000, 001, 010, …, 111). In the quantum state, an atom (one bit) can be in two places at once according to the laws of quantum physics, so 3 atoms (quantum bits, or qubits) can represent all eight numbers at any given time. So for x qubits, 2^x numbers can be stored. Parallel processing can take place on the 2^x input numbers, performing the same task that a classical computer would have to repeat 2^x times or perform with 2^x processors working in parallel. In other words, a quantum computer offers an enormous gain in the use of computational resources such as time and memory. This becomes mind-boggling when you think of what 32 qubits (2^32, or more than four billion simultaneous values) can accomplish.
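The capacity claim above is easy to check directly; a minimal sketch (the function name is ours, for illustration):

```python
# A register of x qubits can hold 2^x basis states in superposition.
def register_capacity(x):
    return 2 ** x

print(register_capacity(3))   # 8, matching the 3-bit example above
print(register_capacity(32))  # 4294967296: over four billion values
```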
This all sounds like another purely technological process. Classical computers can do the same computations as quantum computers, only needing more time and more memory. The catch is that they need exponentially more time and memory to match the power of a quantum computer. An exponential increase is really fast, and available time and memory run out very quickly.
Quantum computers can be programmed in a qualitatively new way using new algorithms. For example, we can construct new algorithms for solving problems, some of which can turn difficult mathematical problems, such as factorization, into easy ones. The difficulty of factorizing large numbers is the basis for the security of many common methods of encryption. RSA, the most popular public-key cryptosystem, used to protect electronic bank accounts, gets its security from the difficulty of factoring very large numbers. This was one of the first potential uses identified for a quantum computer.
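A deliberately tiny sketch makes the link concrete: with toy-sized primes, anyone who can factor the public modulus can reconstruct the RSA private key. (The numbers below are illustrative only; real keys use primes hundreds of digits long, which is exactly what makes factoring infeasible classically.)

```python
from math import gcd

p, q = 61, 53                    # toy primes; real RSA primes are enormous
n = p * q                        # public modulus
phi = (p - 1) * (q - 1)
e = 17                           # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)              # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # decrypt with the private key d

# An attacker who factors n recovers d with no further secrets:
p2 = next(k for k in range(2, n) if n % k == 0)
q2 = n // p2
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))
assert d_recovered == d
```

With 2048-bit moduli the trial-division loop above becomes hopeless, which is precisely the gap a quantum factoring algorithm would close.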
“Experimental and theoretical research in quantum computation is accelerating world-wide. New technologies for realising quantum computers are being proposed, and new types of quantum computation with various advantages over classical computation are continually being discovered and analysed and we believe some of them will bear technological fruit. From a fundamental standpoint, however, it does not matter how useful quantum computation turns out to be, nor does it matter whether we build the first quantum computer tomorrow, next year or centuries from now. The quantum theory of computation must in any case be an integral part of the world view of anyone who seeks a fundamental understanding of the quantum theory and the processing of information.” (Center for Quantum Computation)
In 1995, a $100 bet was made to create, within 16 years, what seemed impossible: the world’s first nanometer supercomputer. The bet gave rise to the NanoComputer Dream Team, which used the Internet to gather talent, amateur and professional, from every scientific field and from all over the world. Their deadline: November 1, 2011.
The massive amount of processing power generated by computer manufacturers has not yet been able to quench our thirst for speed and computing capacity. In 1947, American computer engineer Howard Aiken said that just six electronic digital computers would satisfy the computing needs of the United States. Others have made similar errant predictions about the amount of computing power that would support our growing technological needs. Of course, Aiken didn’t count on the large amounts of data generated by scientific research, the proliferation of personal computers or the emergence of the Internet, which have only fueled our need for more, more and more computing power.
If, as Moore’s Law states, the number of transistors on a microprocessor continues to double every 18 months, then around 2020 or 2030 the circuits on a microprocessor will be measured on an atomic scale. The logical next step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.
Scientists have already built basic quantum computers that can perform certain calculations, but a practical quantum computer is still years away. In this article, you’ll learn what a quantum computer is and what it will be used for in the next era of computing.
You don’t have to go back too far to find the origins of quantum computing. While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago, by a physicist at the Argonne National Laboratory. Paul Benioff is credited with first applying quantum theory to computers in 1981, when he theorized about creating a quantum Turing machine. Most digital computers, like the one you are reading this on, are based on the Turing theory.
The Turing machine, developed by Alan Turing in the 1930s, is a theoretical device that consists of a tape of unlimited length divided into little squares. Each square can either hold a symbol (1 or 0) or be left blank. A read-write device reads these symbols and blanks, which gives the machine its instructions for performing a certain program. Does this sound familiar? Well, in a quantum Turing machine, the difference is that the tape exists in a quantum state, as does the read-write head. This means that the symbols on the tape can be either 0 or 1, or a superposition of 0 and 1; in other words, the symbols are both 0 and 1 (and all points in between) at the same time. While a normal Turing machine can only perform one calculation at a time, a quantum Turing machine can perform many calculations at once.
Today’s computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren’t limited to two states; they encode information as quantum bits, or qubits, which can exist in superposition. Qubits represent atoms, ions, photons or electrons and their respective control devices that are working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today’s most powerful supercomputers.
This superposition of qubits is what gives quantum computers their inherent parallelism. According to physicist David Deutsch, this parallelism allows a quantum computer to work on a million computations at once, while your desktop PC works on one. A 30-qubit quantum computer would equal the processing power of a conventional computer that could run at 10 teraflops (trillions of floating-point operations per second). Today’s typical desktop computers run at speeds measured in gigaflops (billions of floating-point operations per second).
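One way to see why classical hardware falls behind is to count the memory needed just to store a quantum state; a back-of-the-envelope sketch, assuming 16 bytes per complex amplitude:

```python
BYTES_PER_AMPLITUDE = 16  # one double-precision complex number

def sim_memory_bytes(qubits):
    # An n-qubit state is a vector of 2^n complex amplitudes.
    return (2 ** qubits) * BYTES_PER_AMPLITUDE

gib = sim_memory_bytes(30) / 2**30
print("%.0f GiB to store a 30-qubit state" % gib)  # 16 GiB
```

Each extra qubit doubles the requirement, so under the same assumption a 50-qubit state would already need around 16 PiB.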
Quantum computers also utilize another aspect of quantum mechanics known as entanglement. One problem with the idea of quantum computers is that if you try to look at the subatomic particles, you could bump them, and thereby change their value. If you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making measurements indirectly to preserve the system’s integrity. Entanglement provides a potential answer. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom will take on the properties of the first. Left alone, an atom will spin in all directions; the instant it is disturbed, it chooses one spin, or one value, and at the same time the second, entangled atom chooses the opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
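The always-opposite outcomes described above can be caricatured with ordinary classical code; the sketch below models only the correlation, not the underlying quantum mechanics:

```python
import random

def measure_entangled_pair():
    # Toy model of an anti-correlated pair: the first measurement
    # picks a random spin, and the partner always shows the opposite.
    first = random.choice(["up", "down"])
    second = "down" if first == "up" else "up"
    return first, second

for _ in range(5):
    a, b = measure_entangled_pair()
    assert a != b  # the pair never agrees
```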
Here are some recent advancements in the field of quantum computing.
So-called quantum computers are designed to quickly crunch numbers that would take a person a lifetime or longer—for instance, mapping trillions of amino acids for futuristic drug cures or making sense of the avalanche of public data we create daily. So what can you get by putting one to use for your company, as Lockheed Martin (LMT) has since it bought the world’s first corporate model from D-Wave Systems in 2011? (A few weeks ago, Google (GOOG) bought the second.) The aerospace and security giant has been operating its device at the University of Southern California’s Quantum Computation Center for the past 18 months.
Quantum computing uses the quantum nature of matter, the atoms themselves, as computing devices. Normal computer architecture is based on the bit—represented either as a one or a zero. The quantum computer is programmed so that the input is initially both zero and one.
Computer scientists control the microscopic particles that act as qubits in quantum computers by using control devices.
- Ion traps use optical or magnetic fields (or a combination of both) to trap ions.
- Optical traps use light waves to trap and control particles.
- Quantum dots are made of semiconductor material and are used to contain and manipulate electrons.
- Semiconductor impurities contain electrons by using “unwanted” atoms found in semiconductor material.
- Superconducting circuits allow electrons to flow with almost no resistance at very low temperatures.
Today’s Quantum Computers
Quantum computers could one day replace silicon chips, just like the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical.
The most advanced quantum computers have not gone beyond manipulating 16 qubits, meaning that they are a far cry from practical application. However, the potential remains that quantum computers could one day perform, quickly and easily, calculations that are incredibly time-consuming on conventional computers. Several key advancements have been made in quantum computing in the last few years. Let’s look at a few of the quantum computers that have been developed.
Los Alamos and MIT researchers managed to spread a single qubit across three nuclear spins in each molecule of a liquid solution of alanine (an amino acid used to analyze quantum state decay) or trichloroethylene (a chlorinated hydrocarbon used for quantum error correction) molecules. Spreading out the qubit made it harder to corrupt, allowing researchers to use entanglement to study interactions between states as an indirect method for analyzing the quantum information.
In March 2000, scientists at Los Alamos National Laboratory announced the development of a 7-qubit quantum computer within a single drop of liquid. The quantum computer uses nuclear magnetic resonance (NMR) to manipulate particles in the atomic nuclei of molecules of trans-crotonic acid, a simple fluid consisting of molecules made up of six hydrogen and four carbon atoms. The NMR is used to apply electromagnetic pulses, which force the particles to line up. These particles in positions parallel or counter to the magnetic field allow the quantum computer to mimic the information-encoding of bits in digital computers.
Researchers at IBM’s Almaden Research Center developed what they claimed was the most advanced quantum computer to date in August 2000. The 5-qubit quantum computer was designed to allow the nuclei of five fluorine atoms to interact with each other as qubits, be programmed by radio frequency pulses and be detected by NMR instruments similar to those used in hospitals (see How Magnetic Resonance Imaging Works for details). Led by Dr. Isaac Chuang, the IBM team was able to solve in one step a mathematical problem that would take conventional computers repeated cycles. The problem, called order-finding, involves finding the period of a particular function, a typical aspect of many mathematical problems involved in cryptography.
Scientists from IBM and Stanford University successfully demonstrated Shor’s Algorithm on a quantum computer. Shor’s Algorithm is a method for finding the prime factors of numbers (which plays an intrinsic role in cryptography). They used a 7-qubit computer to find the factors of 15. The computer correctly deduced that the prime factors were 3 and 5.
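The heart of Shor’s Algorithm is order-finding, the step the quantum hardware accelerates. Done classically for a number as small as 15 (with an assumed base a = 7), the same recipe reproduces the result:

```python
from math import gcd

def order(a, n):
    # Smallest r > 0 with a**r = 1 (mod n); this is the step a
    # quantum computer performs exponentially faster than this loop.
    r, val = 1, a % n
    while val != 1:
        val = (val * a) % n
        r += 1
    return r

def factor_via_order(n, a=7):
    r = order(a, n)           # for a=7, n=15: r = 4
    x = pow(a, r // 2, n)     # requires r to be even, as it is here
    return sorted((gcd(x - 1, n), gcd(x + 1, n)))

print(factor_via_order(15))   # [3, 5]
```

For cryptographically sized n this classical loop takes exponentially long, which is why the 7-qubit demonstration mattered despite the tiny input.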
The Institute of Quantum Optics and Quantum Information at the University of Innsbruck announced that scientists had created the first qubyte, or series of 8 qubits, using ion traps.
Scientists in Waterloo and Massachusetts devised methods for quantum control on a 12-qubit system. Quantum control becomes more complex as systems employ more qubits.
Canadian startup company D-Wave demonstrated a 16-qubit quantum computer. The computer solved a sudoku puzzle and other pattern-matching problems. The company claims it will produce practical systems by 2008. Skeptics believe practical quantum computers are still decades away, that the system D-Wave has created isn’t scalable, and that many of the claims on D-Wave’s Web site are simply impossible (or at least impossible to know for certain given our understanding of quantum mechanics).
If functional quantum computers can be built, they will be valuable in factoring large numbers, and therefore extremely useful for decoding and encoding secret information. If one were to be built today, no information on the Internet would be safe. Our current methods of encryption are simple compared to the complicated methods possible in quantum computers. Quantum computers could also be used to search large databases in a fraction of the time that it would take a conventional computer. Other applications could include using quantum computers to study quantum mechanics, or even to design other quantum computers.
But quantum computing is still in its early stages of development, and many computer scientists believe the technology needed to create a practical quantum computer is years away. Quantum computers must have at least several dozen qubits to be able to solve real-world problems, and thus serve as a viable computing method.
Gen Y, called the Golden Generation – Traits
a) Cannot have enough freedom at work
b) Constantly seek time and space from seniors; they mine for information from many sources
c) Vociferous in displaying their IQ and abilities
d) Want quick rewards, provide quick solutions, demand quick solutions
e) Want to have lofty and unrealistic goals
f) Are high on energy, with strong self-belief and faith in themselves
g) Want work–life balance from day 1; they are for sabbaticals of two months in a 12-month year
h) Career growth is critical; money is a given
i) Want to control their lives even at work; they are not for apology, and sometimes they brag about their abilities
Their qualities include being impatient, experiment-driven, inquisitive, enthusiastic, technology-savvy, self-aware and individualistic. They constantly re-skill and re-tool themselves in their jobs, and they identify hobbies to delve into – eg writing, music, reading – finding their groove in those hobby areas and developing them.
They should be kept occupied with career challenges. Treat them as equals: command and control does not work for them. Have transparent conversations, be consistent with work expectations, and be one up on knowledge to earn their respect. Engage them on these terms and their engagement is assured.
Cloud computing can be defined as a pool of virtualised computing resources that allows users to gain access to applications and data in a web-based environment on demand. This post explains the various cloud architectures and usage models that exist, and some of the benefits of using cloud services. It seeks to contribute to a better understanding of the emerging threat landscape created by cloud computing, with a view to identifying avenues for risk reduction. Three avenues for action are identified: the need for a culture of cyber security to be created through the development of effective public-private partnerships; the need for privacy regimes to be reformed to deal with the issues created by cloud computing; and the need for cyber-security researchers to find ways to mitigate existing and new security risks in the cloud computing environment.
Cloud computing is now firmly established in the information technology landscape and its security risks need to be mapped and addressed at this critical stage in its development.
A computer’s operating system, applications and data are typically installed and stored in the ‘traditional’ computing environment. In a cloud computing environment, individuals and businesses work with applications and data stored and/or maintained on shared machines in a web-based environment rather than physically located in the home of a user or a corporate environment. Lew Tucker, Vice President and Chief Technology Officer of Cloud Computing at Sun Microsystems, explained that cloud computing is ‘the movement of application services onto the Internet and the increased use of the Internet to access a wide variety of services traditionally originating from within a company’s data center’ (Creeger 2009: 52). For example, web-based applications such as Google’s Gmail™ can be accessed in real time from an Internet-connected machine anywhere in the world.
Use of cloud services creates a growing interdependence among both public and private sector entities and the individuals served by these entities. This post provides a snapshot of risk areas specific to cloud services and those that apply more generally in an online environment which clients of cloud service providers should be aware of.
It is not clear when the term cloud computing was first coined. For example, Bartholomew (2009), Bogatin (2006) and several others suggested that ‘cloud computing’ terminology was, perhaps, first coined by Google™ Chief Executive Eric Schmidt in 2006. Kaufman (2009: 61) suggests that cloud computing terminology ‘originates from the telecommunications world of the 1990s, when providers began using virtual private network (VPN) services for data communication’. Desisto, Plummer and Smith (2008: 1) state that ‘[t]he first SaaS [Software as a Service] offerings were delivered in the late 1990s…[a]lthough these offerings weren’t called cloud computing’. There is, however, agreement on the definition of cloud computing.
The National Institute of Standards and Technology defines cloud computing as
a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (eg networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction (Mell 2009: 9).
Architectures and deployment models
Cloud architectures can be broadly categorised into:
Infrastructure as a Service (IaaS) is the foundation of cloud services. It provides clients with access to server hardware, storage, bandwidth and other fundamental computing resources. For example, Amazon EC2 allows individuals and businesses to rent machines preconfigured with selected operating systems on which to run their own applications.
Platform as a Service (PaaS) builds upon IaaS and provides clients with access to the basic operating software and optional services to develop and use software applications (eg database access and payment service) without the need to buy and manage the underlying computing infrastructure. For example, Google App Engine allows clients to run their web applications (ie software that can be accessed using a web browser such as Internet Explorer over the internet) on Google’s infrastructure.
Software as a Service (SaaS) builds upon the underlying IaaS and PaaS and provides clients with integrated access to software applications. For example, Oracle SaaS Platform allows independent software vendors to build, deploy and manage SaaS and cloud-based applications using a licensing economic model: users purchase a license and support for components of the Oracle SaaS Platform on a monthly basis.
Cloud services can be used in a private, public, community/managed or hybrid setting (Cloud Security Alliance 2009). Privately-hosted cloud services are generally considered a safer but more costly option than services using a shared-tenancy setting (ie data from different clients stored on a single physical machine). In line with this, the US Government recently announced an initiative ‘to offer cloud-based services that are hosted in private data centers and which could be used to handle more sensitive data’ (McMillan 2009: np).
In a community/managed setting, tenancy can either be single (dedicated) or shared and the IT infrastructure is either managed by the organisation or a third-party cloud service provider. The main difference between hybrid cloud services and other cloud services is that the former ‘is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability’ (Mell & Grance 2009: 13).
Cloud computing provides a scalable online environment which facilitates the ability to handle an increased volume of work without impacting on the performance of the system. Cloud computing also offers significant computing capability and economy of scale that might not otherwise be affordable to businesses, especially small and medium enterprises (SMEs) that may not have the financial and human resources to invest in IT infrastructure. Advantages include:
- Capital costs—SMEs can provide unique services using large-scale resources from cloud service providers and ‘add or remove capacity from their IT infrastructure to meet peak or fluctuating service demands while paying only for the actual capacity used’ (Sotomayor et al. 2009: 14) on a ‘pay-as-you-go’ economic model.
- Running costs—it can also be significantly cheaper to rent added server space for a few hours at a time than to maintain proprietary servers. Rental prices for Amazon Elastic Compute Cloud (EC2), for example, are between US$0.10 and US$1.00 an hour. Businesses do not have to worry about upgrading their resources whenever a new version of an application becomes available. Businesses can also base their services in the data centres of bigger enterprises or host their IT infrastructure in locations offering the lowest cost.
Advantages of using cloud services can also go beyond cost savings as cloud computing allows clients to:
- Avoid the expense and time-consuming task of installing and maintaining hardware infrastructure and software applications; and
- Benefit from the rapid provisioning and use of services, as providers optimise their IT infrastructure (Lewin 2009).
External hosting of applications and storage also ensures redundancy and business continuity in the event of a site failure.
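The rent-versus-buy trade-off behind these cost advantages can be made concrete with a simple break-even calculation. The capital cost and hourly rate below are purely illustrative assumptions, not quoted prices:

```python
def break_even_hours(capital_cost, hourly_rate):
    """Hours of rental at which cumulative rent equals the upfront purchase cost."""
    return capital_cost / hourly_rate

# Illustrative figures only: a US$5,000 server vs. an instance rented
# at US$0.40 per hour on a pay-as-you-go model.
hours = break_even_hours(5000, 0.40)
print(round(hours))  # 12500 hours (roughly 17 months of continuous use)
```

For workloads that only spike occasionally, the rental bill may never approach the break-even point, which is the core of the pay-as-you-go argument above.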
Service level agreements
To obtain guarantees of service delivery, businesses using cloud computing services typically enter into service level agreements (SLAs) with their cloud service providers. Although SLAs vary between businesses and cloud service providers, they typically specify the required/agreed service level through quality of service parameters, the level of service availability, the security measures adopted by the cloud service provider and the rates for the services.
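One commonly negotiated SLA parameter, service availability, translates directly into a monthly downtime budget. A small sketch (the availability levels below are illustrative, not drawn from any particular provider's SLA):

```python
def monthly_downtime_budget(availability_pct, days=30):
    """Minutes of permitted downtime per month for a given availability level."""
    total_minutes = days * 24 * 60  # minutes in the billing month
    return total_minutes * (1 - availability_pct / 100)

# Each extra 'nine' of availability cuts the downtime budget by a factor of ten.
for pct in (99.0, 99.9, 99.99):
    print(pct, round(monthly_downtime_budget(pct), 1))
# 99.0  -> 432.0 minutes
# 99.9  -> 43.2 minutes
# 99.99 -> 4.3 minutes
```

Expressing the agreed service level this way makes it easier to check whether a provider's outage history actually fits within the contract.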
Cloud computing risks
Attacks targeting shared-tenancy environments
A virtual machine (VM) is the software implementation of a computer that runs its own operating system and applications as if it were a physical machine (VMWare 2009). Multiple VMs can concurrently run different software applications on different operating system environments on a single physical machine. This reduces hardware costs and space requirements.
In a shared-tenancy cloud computing environment, data from different clients can be hosted on separate VMs but reside on a single physical machine. This provides maximum flexibility. Software applications running in one VM should not be able to impact or influence software running in another VM. An individual VM should be unaware of the other VMs running in the environment as all actions are confined to its own address space.
In a recent study, a team of computer scientists from the University of California, San Diego and Massachusetts Institute of Technology examined the widely-used Amazon EC2 services. They found that ‘it is possible to map the internal cloud infrastructure, identify where a particular target VM is likely to reside, and then instantiate new VMs until one is placed co-resident with the target’ (Ristenpart et al. 2009: 199). This demonstrated that the research team were able to load their eavesdropping software onto the same servers hosting targeted websites (Hardesty 2009). By identifying the target VMs, attackers can potentially monitor the cache (a small allotment of high-speed memory used to store frequently-used information) in order to steal data hosted on the same physical machine (Hardesty 2009). Such an attack is also known as a side-channel attack.
The findings from this research may only be a proof of concept at this stage, but they raise concerns about the possibility of cloud computing servers becoming a central point of vulnerability that can be criminally exploited. The Cloud Security Alliance, for example, listed this as one of the top threats to cloud computing.
Attacks have surfaced in recent years that target the shared technology inside Cloud Computing environments. Disk partitions, CPU caches, GPUs, and other shared elements were never designed for strong compartmentalization. As a result, attackers focus on how to impact the operations of other cloud customers, and how to gain unauthorized access to data. (Cloud Security Alliance 2010: 11)
Vulnerabilities in VMs can be exploited by malicious code (malware) such as VM-based rootkits designed to infect both client and server machines in cloud services. Rootkits are cloaking technologies usually employed by other malware programs to abuse compromised systems by hiding files, registry keys and other operating system objects from diagnostic, antivirus and security programs. For example, in April 2009, a security researcher pointed out how a critical vulnerability in VMware’s VM display function could be exploited to run malware, allowing an attacker ‘to read and write memory on the “host” operating system [OS]’ (Keizer 2009: np).
VM-based rootkits, as pointed out by Price (2008: 27), could be used by attackers to ‘gain complete control of the underlying OS without the compromised OS being aware of their existence…[and] are especially dangerous because they also control all hardware interfaces. Once the VM-based rootkits are installed on the machine, they can “view keystrokes, network packets, disk state, and memory state, while the compromised OS remains oblivious”’.
Bot malware typically takes advantage of system vulnerabilities, software bugs or hacker-installed backdoors to install malicious code on machines without the owners’ consent or knowledge, often for nefarious purposes. Machines infected with bot malware are turned into ‘zombies’ and can be used as remote attack tools or to form part of a botnet under the control of the botnet controller.
Zombies are compromised machines waiting to be activated by their command and control (C&C) servers. The C&C servers are often machines that have been compromised and arranged in a distributed structure to limit traceability.
Cybercriminals could potentially abuse cloud services to operate C&C servers for distributed denial-of-service (DDoS) attacks – attacks from multiple sources that target specific websites by flooding a web server with repeated messages, tying up the system and denying access to legitimate users – as well as for other cybercriminal activities. In December 2009, for example, a ‘new wave of a Zeus bot (Zbot) variant was spotted taking advantage of Amazon EC2’s cloud-based services for its C&C…functionalities’ (Ferrer 2009: np).
Launch pad for brute force and other attacks
There have also been suggestions that the virtualised infrastructure can be used as a launching pad for new attacks. A security consultant recently suggested that it may be possible to abuse cloud computing services to launch a brute force attack (a strategy used to break encrypted data by trying all possible decryption key or password combinations) on various types of passwords. Using Amazon EC2 as an example, the consultant estimated that based on the ‘hourly fees Amazon charges for its EC2 web service, it would cost more than [US]$1.5m to brute force a 12-character password containing nothing more than lower-case letters a through z…[but] an 11-character code costs less than [US]$60,000 to crack, and a 10-letter phrase costs less than [US]$2,300’ (Goodin 2009: np).
Although it is still relatively expensive to perform brute force online password-guessing attacks (also known as online dictionary attacks), this could have broad implications for systems using password-based authentication. It may not take long for attackers to design a more practical and cheaper mechanism that exploits cloud services as a launch pad for other attacks, a threat also identified by the Cloud Security Alliance (2010: 8):
Future areas of concern include password and key cracking, DDOS, launching dynamic attack points, hosting malicious data, botnet command and control, building rainbow tables, and CAPTCHA solving farms.
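The password-cracking economics quoted above scale geometrically: each additional lowercase character multiplies the keyspace, and hence the estimated rental bill, by 26. A quick consistency check against the figures attributed to Goodin:

```python
def scaled_cost(base_cost, base_len, target_len, alphabet_size=26):
    """Scale a brute-force cost estimate to a different password length,
    assuming cost is proportional to keyspace size (alphabet_size ** length)."""
    return base_cost * alphabet_size ** (target_len - base_len)

# Starting from the ~US$2,300 estimate for a 10-letter lowercase password:
print(scaled_cost(2300, 10, 11))  # 59800  -> consistent with 'less than US$60,000'
print(scaled_cost(2300, 10, 12))  # 1554800 -> consistent with 'more than US$1.5m'
```

The same arithmetic shows why adding even one or two characters, or mixing in other character classes, sharply raises the cost of this style of attack.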
Data availability (business continuity)
A major risk to business continuity in the cloud computing environment is loss of internet connectivity (which could occur in a range of circumstances, such as natural disasters), as businesses depend on internet access to their corporate information. In addition, if a vulnerability is identified in a particular service provided by the cloud service provider, the business may have to suspend all access to the cloud service provider until it can be assured that the vulnerability has been rectified.
There are also concerns that the seizure of a data-hosting server by law enforcement agencies may result in the unnecessary interruption or cessation of unrelated services whose data is stored on the same physical machine.
In a recent example, ‘FBI agents [reportedly] seized computers from a data center at 2323 Bryan Street in Dallas, Texas, attempting to gather evidence in an ongoing investigation of two men and their various companies accused of defrauding AT&T and Verizon for more than US$6 million’ (Lemos 2009: np). This resulted in the unintended consequence of disrupting the continuity of businesses whose data and information are hosted on the seized hardware.
[For] LiquidMotors, a company that provides inventory management to car dealers, the servers held its client data and hosted its managed inventory services. The FBI seizure of the servers in the data center rack effectively shut down the company, which filed a lawsuit against the FBI the same day to get the data back (Lemos 2009: np).
While the above example may be an isolated case, it raised concerns about unauthorised access to seized data not related to the warrant, which can result in the unintended disclosure of data to unwanted parties, particularly in authoritarian countries.
There have been a number of reported incidents of cloud services being taken offline by DDoS attacks (see Metz 2009). Although DDoS attacks are not new, the cloud computing environment presents a new attack vector that may have a more widespread impact on internet users.
The security measures adopted by different cloud service providers vary. If ‘a cybercriminal can identify the [cloud service] provider whose vulnerabilities are the easiest to exploit, then this entity becomes a highly visible target. The lack of security associated with this single entity threatens the entire cloud in which it resides’ (Kaufman 2009: 63).
Just like entrepreneurs, cybercriminals and organised crime groups are always on the lookout for new markets and with the rise of cloud computing, a new sector for exploitation now exists. Rogue cloud service providers based in jurisdictions with lax cybercrime legislation can provide confidential hosting and data storage services for a usually steep fee. Such services could potentially be abused by organised crime groups to store and distribute criminal data (eg child abuse materials for commercial purposes) to avoid the scrutiny of law enforcement agencies.
Hosting confidential business data with cloud service providers involves the transfer of a considerable amount of management control to cloud service providers that usually results in diminished control over security arrangements. There is the risk of rogue providers mining the data for secondary uses such as marketing and reselling the mined data to other businesses. A June 2009 email survey of 220 decision-makers in US organisations with more than 1,000 employees highlighted similar concerns. In the survey, 40.5 percent of the respondents agreed/strongly agreed that ‘[t]he trend toward using SaaS and cloud computing solutions in the enterprise seriously increases the risk of data leakage’ (Proofpoint 2009: 24).
Unfortunately, clients (especially SMEs) are often less aware of the risks and may not have an easy way of determining whether a particular cloud service provider is trustworthy. Tim Watson, head of the computer forensics and security group at De Montfort University, remarked that ‘one provider may offer a wonderfully secure service and another may not, if the latter charges half the price, the majority of organisations will opt for it as they have no real way of telling the difference’ (Everett 2009: 7).
Other potential risks
There is increasing pressure for nation-states to develop cyber-offensive capabilities. The next wave of cyber-security threats could potentially be targeted attacks aimed at specific government agencies and organisations, or individuals within enterprises including cloud service providers. For example, Google and several Gmail accounts belonging to Chinese and Tibetan activists have reportedly been targeted (Google 2010; Helft & Markoff 2010).
Foreign intelligence services and industrial spies may not disrupt the normal functioning of an information system as they are mainly interested in obtaining information relevant to vital national or corporate interests. They do so through clandestine entry into computer systems and networks as part of their information-gathering activities.
Cloud service providers may be compelled to scan or search data of interest to ‘national security’ and to report on, or monitor, particular types of transactional data as these data may be subject to the laws of the jurisdiction in which the physical machine is located (Gellman 2009). In addition, overseas cloud service providers may not be legally obliged to notify the clients (owners of the data) about such requests.
Regulation and governance
Some cloud service providers argue that such jurisdictional issues may be capable of resolution contractually via SLAs and the like. Clients using cloud services could include clauses in their SLAs specifying the law governing the SLA and the choice of competent court for disputes arising from the interpretation and execution of the contract. The Cloud Security Alliance (2009: 28) also suggested that clients of cloud services should require their providers ‘to deliver a comprehensive list of the regulations and statutes that govern the site and associated services and how compliance with these items is executed’.
Businesses should ensure that SLAs and other legally-binding contractual arrangements with cloud service providers comply with applicable regulatory obligations (eg privacy laws) and industry standards, as they may be liable for breaching these regulations even when the data being breached is held or processed by the cloud service provider.
Determining the law of the jurisdiction in which the SLA is held is an important issue. It may not, however, be as simple as examining the contractual laws that govern operations of cloud service providers to determine which jurisdiction’s laws apply in any particular case. Gellman (2009: 19) pointed out that ‘[t]he user may be unaware of the existence of a second-degree provider or the actual location of the user’s data…[and] it may be impossible for a casual user to know in advance or with certainty which jurisdiction’s law actually applies to information entrusted to a cloud provider’.
Businesses should continue to conduct due diligence on cloud service providers, maintain a comprehensive compliance framework and ensure that protocols are in place to continuously monitor and manage cloud service providers, offshore vendors and the associated outsourcing relationships. This would ensure that businesses have a detailed understanding of where and how their data are stored, retain some degree of oversight, and have an acceptable authentication and access mechanism in place to meet their privacy and confidentiality needs.
The way forward: a culture of security
Vulnerabilities in a particular cloud service or cloud computing environment can potentially be exploited by criminals and actors with malicious intent. However, no single public or private sector entity ‘owns’ the issue of cyber security. There is, arguably, a need to take a broader view and promote transparency and confidence building between cloud service providers, businesses and government agencies using cloud services as well as between government and law enforcement agencies.
In addition, an effective cyber-security policy should be comprehensive and encompass all (public and private sector) entities. The public and private sectors should continue to work together to:
- Identify and prioritise current and emerging risk areas;
- Develop and validate effective measures and mitigation controls. This would involve establishing a standard that mandates certain minimum requirements to ensure an adequate level of electronic information exchange security; and
- Ensure that these strategies are implemented and updated at the respective level.
It is reasonable to assume that higher levels of security can only be achieved at higher marginal costs. To encourage a culture of security, governments could incubate and create market incentives for cloud service providers to integrate security into the software and hardware and system development life cycle. An improved level and type of security is likely to increase the marginal cost of security violations, which in turn will reduce the marginal benefits of cybercrime.
One example is to create an environment in which cloud service providers can gain marketing and competitive advantages by offering products and services with higher levels, and more innovative types, of security to assist in combating cyber exploitation. This could be accomplished through government tenders. Dealing with insider threats should also be incorporated into the software/hardware and system development life cycle.
Cloud computing security: insider threats
Cloud computing security is rife with new opportunities for future research, including cloud-related insider threats. We do not believe the nature of the insider will change due to cloud computing’s impact, but the opportunities for attacks will broaden. Researchers should take note of these new opportunities and respond accordingly to prevent, detect, and respond to new cloud-related insider attacks. Some important future research topics are:
- Socio-technical approaches to insider threats
- Predictive models
- Identifying cloud-based indicators
- Virtualization and hypervisors
- Awareness and reporting
- Normal user behavior analysis
- Policy integration
Geoinformatics, also known as geographic information science (GIS) and geographic information technology (GIT), is the science and technology that develops and uses information science infrastructure to address problems in geography, the geosciences and related branches of engineering.
Geoinformatics has been described as “the science and technology dealing with the structure and character of spatial information, its capture, its classification and qualification, its storage, processing, portrayal and dissemination, including the infrastructure necessary to secure optimal use of this information” or “the art, science or technology dealing with the acquisition, storage, processing, production, presentation and dissemination of geoinformation”.
Geomatics is a similarly used term which encompasses geoinformatics, but geomatics focuses more on surveying. Geoinformatics has at its core the technologies supporting the processes of acquiring, analyzing and visualizing spatial data. Both geomatics and geoinformatics include and rely heavily upon the theory and practical implications of geodesy.
Geography and earth science increasingly rely on digital spatial data acquired from remotely sensed images analyzed by geographical information systems (GIS) and visualized on paper or the computer screen.
Geoinformatics combines geospatial analysis and modeling, development of geospatial databases, information systems design, human-computer interaction and both wired and wireless networking technologies. Geoinformatics uses geocomputation and geovisualization for analyzing geoinformation.
Branches of geoinformatics include:
- Geographic Information Systems
- Global Navigation Satellite Systems
- Remote sensing
- Web mapping
Many fields benefit from geoinformatics, including urban planning and land use management, in-car navigation systems, virtual globes, public health, local and national gazetteer management, environmental modelling and analysis, the military, transport network planning and management, agriculture, meteorology and climate change, oceanography and coupled ocean and atmosphere modelling, business location planning, architecture and archaeological reconstruction, telecommunications, criminology and crime simulation, aviation and maritime transport. The importance of the spatial dimension in assessing, monitoring and modelling issues related to the sustainable management of natural resources is recognized all over the world. Geoinformatics has become a very important technology for decision-makers across a wide range of disciplines, industries and sectors, including environmental agencies, local and national government, research and academia, national survey and mapping organisations, international organisations, the United Nations, emergency services, public health and epidemiology, crime mapping, transportation and infrastructure, the information technology industry, GIS consulting firms, environmental management agencies, the tourist industry, utility companies, market analysis and e-commerce, and mineral exploration. Many government and non-government agencies have started to use spatial data to manage their day-to-day activities.
A geo-fence is a virtual perimeter for a real-world geographic area.
A geo-fence could be dynamically generated—as in a radius around a store or point location. Or a geo-fence can be a predefined set of boundaries, like school attendance zones or neighborhood boundaries. Custom-digitized geofences are also in use.
When the location-aware device of a location-based service (LBS) user enters or exits a geo-fence, the device receives a generated notification. This notification might contain information about the location of the device. The geofence notice might be sent to a mobile telephone or an email account.
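The enter/exit notification logic described above can be sketched as a simple radius geofence check. This is a minimal sketch assuming a dynamically generated circular fence; the haversine formula approximates great-circle distance, and all coordinates and radii below are illustrative:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_event(prev_inside, lat, lon, fence_lat, fence_lon, radius_m):
    """Return 'enter', 'exit' or None as a device position crosses a radius geofence."""
    inside = haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m
    if inside and not prev_inside:
        return 'enter'
    if not inside and prev_inside:
        return 'exit'
    return None  # no boundary crossing, so no notification

# Device previously outside reports a position at the fence centre:
print(geofence_event(False, 51.5, -0.1, 51.5, -0.1, 500))  # enter
```

An 'enter' or 'exit' result is the point at which a location-based service would generate the SMS or email notification mentioned above.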
Geofencing, used with child location services, can notify parents when a child leaves a designated area.
Geofencing is a critical element of telematics hardware and software. It allows users of the system to draw zones around places of work, customer sites and secure areas. When crossed by an equipped vehicle or person, these geo-fences can trigger a warning to the user or operator via SMS or email.
Other applications include sending an alert if a vehicle is stolen and notifying rangers when wildlife stray into farmland.
Geofencing in a security strategy model provides security to wireless local area networks. This is done by using predefined borders, e.g., an office space with borders established by positioning technology attached to a specially programmed server. The office space becomes an authorized location for designated users and wireless mobile devices.
The Global Positioning System (GPS) is a space-based satellite navigation system that provides location and time information in all weather, anywhere on or near the Earth, where there is an unobstructed line of sight to four or more GPS satellites. It is maintained by the United States government and is freely accessible to anyone with a GPS receiver.
The GPS program provides critical capabilities to military, civil and commercial users around the world. In addition, GPS is the backbone for modernizing the global air traffic system.
The GPS project was developed in 1973 to overcome the limitations of previous navigation systems, integrating ideas from several predecessors, including a number of classified engineering design studies from the 1960s. GPS was created and realized by the U.S. Department of Defense (DoD) and was originally run with 24 satellites. It became fully operational in 1994.
Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS system and implement the next generation of GPS III satellites and Next Generation Operational Control System (OCX). Announcements from the Vice President and the White House in 1998 initiated these changes. In 2000, U.S. Congress authorized the modernization effort, referred to as GPS III.
In addition to GPS, other systems are in use or under development. The Russian GLObal NAvigation Satellite System (GLONASS) was used only by the Russian military until it was made fully available to civilians in 2007. There are also the planned European Union Galileo positioning system, the Chinese Compass navigation system and the Indian Regional Navigational Satellite System.
Geotagging (also written as GeoTagging) is the process of adding geographical identification metadata to various media such as a geotagged photograph or video, websites, SMS messages, QR Codes or RSS feeds and is a form of geospatial metadata. This data usually consists of latitude and longitude coordinates, though they can also include altitude, bearing, distance, accuracy data, and place names.
Geotagging can help users find a wide variety of location-specific information. For instance, one can find images taken near a given location by entering latitude and longitude coordinates into a suitable image search engine. Geotagging-enabled information services can also potentially be used to find location-based news, websites, or other resources. Geotagging can tell users the location of the content of a given picture or other media or the point of view, and conversely on some media platforms show media relevant to a given location.
The related term geocoding refers to the process of taking non-coordinate based geographical identifiers, such as a street address, and finding associated geographic coordinates (or vice versa for reverse geocoding). Such techniques can be used together with geotagging to provide alternative search techniques.
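Geotag coordinates are usually recorded as decimal latitude/longitude, but EXIF GPS tags in photographs typically store them as degrees, minutes and seconds. A small sketch of that conversion (the sample coordinate is illustrative):

```python
def to_dms(decimal_degrees):
    """Convert a decimal coordinate to (degrees, minutes, seconds),
    the form EXIF GPS metadata typically stores. The sign is carried
    on the degrees component; EXIF itself uses a separate N/S or E/W tag."""
    sign = -1 if decimal_degrees < 0 else 1
    dd = abs(decimal_degrees)
    degrees = int(dd)
    minutes = int((dd - degrees) * 60)
    seconds = (dd - degrees - minutes / 60) * 3600
    return sign * degrees, minutes, round(seconds, 2)

# Illustrative coordinate (a latitude in Paris):
print(to_dms(48.858222))  # (48, 51, 29.6)
```

Search engines that accept latitude and longitude, as described above, work directly in decimal degrees, so the reverse conversion is what a geotag reader performs when indexing photos.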
Geocaching is an outdoor sporting activity in which the participants use a Global Positioning System (GPS) receiver or mobile device and other navigational techniques to hide and seek containers, called “geocaches” or “caches”, anywhere in the world.
A typical cache is a small waterproof container containing a logbook where the geocacher enters the date they found it and signs it with their established code name. Larger containers such as plastic storage containers (Tupperware or similar) or ammunition boxes can also contain items for trading, usually toys or trinkets of little value. Geocaching shares many aspects with benchmarking, trigpointing, orienteering, treasure-hunting, letterboxing, and waymarking.
Geocaches are currently placed in over 200 countries around the world and on all seven continents, including Antarctica, and the International Space Station. After more than 12 years of activity there are over 1.7 million active geocaches published on various websites. There are over 5 million geocachers worldwide.
Web Feature Service
The Open Geospatial Consortium Web Feature Service Interface Standard (WFS) provides an interface allowing requests for geographical features across the web using platform-independent calls. One can think of geographical features as the “source code” behind a map, whereas the WMS interface or online mapping portals like Google Maps return only an image, which end-users cannot edit or spatially analyze. The XML-based GML furnishes the default payload-encoding for transporting the geographic features, but other formats like shapefiles can also serve for transport. In early 2006, the OGC members approved the OpenGIS GML Simple Features Profile. This profile is designed to both increase interoperability between WFS servers and to improve the ease of implementation of the WFS standard.
The OGC membership defined and maintains the WFS specification. There are numerous commercial and open source implementations of the WFS interface standard, including an open source reference implementation called GeoServer. A comprehensive list of WFS implementations can be found in the OGC implementation database.
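As a sketch of how a client requests features across the web, the following builds a WFS GetFeature request URL. The endpoint, feature type name and output format are placeholder assumptions; the core parameter names (service, version, request, typeName) come from the WFS interface standard:

```python
from urllib.parse import urlencode

# Hypothetical WFS endpoint; "topp:states" is an illustrative feature type name.
base = "https://example.com/geoserver/wfs"
params = {
    "service": "WFS",
    "version": "1.1.0",
    "request": "GetFeature",
    "typeName": "topp:states",
    "maxFeatures": "50",
}
url = base + "?" + urlencode(params)
print(url)
```

The server's response would be the requested features encoded in GML (or another negotiated format such as shapefiles), which the client can then edit or analyse spatially, unlike a rendered map image.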
Web Map Service
A Web Map Service (WMS) is a standard protocol for serving georeferenced map images over the Internet that are generated by a map server using data from a GIS database. The specification was developed and first published by the Open Geospatial Consortium in 1999.
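By contrast with WFS, a WMS GetMap request returns a rendered image. A sketch of such a request, assuming a hypothetical endpoint and placeholder layer name, with parameter names drawn from the WMS 1.1.1 protocol:

```python
from urllib.parse import urlencode

# Hypothetical WMS endpoint; "topp:states" is an illustrative layer name.
base = "https://example.com/geoserver/wms"
params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "topp:states",
    "bbox": "-125,24,-66,50",  # minx,miny,maxx,maxy in the given SRS
    "srs": "EPSG:4326",
    "width": "800",
    "height": "400",
    "format": "image/png",
}
url = base + "?" + urlencode(params)
print(url)  # the map server would respond with a rendered PNG image
```

Because the payload is just an image, end users cannot edit or spatially analyse it, which is exactly the distinction the WFS section above draws.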
Farmville buffs zealously work their fields, buying seeds and equipment, harvesting and selling crops – all in an obsessive bid to earn points, win badges and rise to higher and higher levels. What has got them hooked on posting needed items, gifting plants or animals and adding neighbors is the heart-warming recognition through the barrage of ‘likes’ and ‘comments’ that follow! In fact, the stupendous success of Facebook has proved that people simply love to share what’s going on in both their personal and professional lives across their social networks.
So, why not move these motivational techniques to an organizational platform to offer real-time recognition to employees 365 days a year, even on the go! Everyone wants to be appreciated for their contributions and achievements – so much so that an international strategic group, the ‘Recognition Council’, has been formed to promote awareness of how recognition and rewards, in their many forms, are part of an effective strategy for achieving better business performance! The traditional reward schemes and pay-for-performance incentives are no longer sufficient. The new generation craves constant feedback and praise, and in a highly visible format.
Social recognition steps in as the new currency that fosters employee engagement. Who cares what people think? Turns out … just about everybody. People love to be recognized, but they like it even better when they can share it throughout their company. That’s why we believe in the power of Social Employee Recognition – giving employees the power to share all of their great work company-wide through features such as Live Recognition. There’s more! Employees can also share the good work they are doing on their external social networks such as Facebook, LinkedIn and Twitter. Give people a chance to look like rock stars in front of their friends and make them envious of the cool place where they work. It’s more than just an ego boost for people – it’s a shortcut to employee engagement, motivation and retention. This subtle performance management works by adding a social interface to employee appreciation efforts that showcases good work in a public forum.
New technology tools of social media and mobile applications bring informal, immediate, frequent, interactive and visible-to-all element to employee recognition. Now employee efforts, inputs, skills, knowledge and accomplishments can easily be celebrated in the open. In short, move the offline pat-on-the-back to an online realm!
Simple posts like ‘Great job’, ‘Well done’ or even a ‘Thank you for your efforts’ on Facebook, Twitter, LinkedIn or more private company intranets is all that employees need.
The subsequent social feedback in the form of likes, comments, retweets and shares magnifies the value of the appreciation.
TPG Software topped an all-India survey of ‘Top Corporate Organizations for Best Practices in Rewards and Recognition’ by Edenred, a loyalty solution organization, in association with the Great Place to Work Institute. The highlight of TPG’s rewards program is that managers personally congratulate people for a good job by writing personal notes about good performance on the company intranet as well as social and professional networks such as Facebook and LinkedIn.
American Express which ranked third, uses ‘RewardBlue’ – an internal intranet to deliver congratulatory messages to well-performing employees.
When employees feel that their efforts do matter and the effect is amplified with the congratulatory notes and approvals all around, the validation affects their self-perception and identity within the organization. The public recognition forges deeper connections, making them more supportive, loyal and eager to improve performance.
The ‘Social Recognition and Employees’ Organizational Support’ research thesis studied over 900 employees in service organisations. It concludes that social recognition contributes to increased self-respect, which in turn means that employees make a greater effort to act in the company’s best interests.
Broadcasting day-to-day employee recognition stories that would otherwise have gone untold not only encourages the appreciated behavior in the recipient, but also inspires other ‘viewers’ to emulate the behaviors that drive appreciation and success. The organization also gets an opportunity to showcase the activities that are expected, liked and valued while reinforcing the desired corporate culture. The elated recipients will in turn tell the world (on social networking sites, again) how their company recognizes good work and what a great place to work it is!
There is an added element of peer-to-peer recognition, beyond the customary top-down method, as literally anyone can appreciate anyone. Management can also use this input in performance reviews and to identify key talent. Eric Mosley, CEO of Globoforce, an employee recognition solutions provider, points out in a Harvard Business Review article, “Employees better understand what performance is desired on an on-going basis while managers can see first-hand an employee’s true performance, behaviors and influence.”
“The bigger business impact of social software in the HR context will be on the organization culture. Social software can alter the organizational fabric and culture, creating a more open and collaborative work environment,” observes Jeffrey Mann, Vice President at Gartner Research. It is easy, fast, and even safe and secure when used on internal networking sites. The impact is astounding and the value extraordinary, at only the nominal cost of the internal technology platform. Moreover, this innovative form of recognition is completely non-monetary!
Recognition Council research reveals, “While some incentive clients are ready to use public tools like Facebook and LinkedIn, many prefer to keep programmes behind the firewall; using mechanisms that often resemble Facebook and other social media venues.” Top companies are reaping the rewards by turning recognition into a business asset.
Symantec reported a 16% increase in employee engagement in less than a year, and KPMG a solid 165%. DHL, Discovery Channel and P&G are also using this medium very successfully. “What sets social recognition apart is that it is unexpected. Whilst the overall monetary output from the organisation is relatively low, the impact is huge…employees don’t focus on the amount, they are just happy that someone has appreciated and acknowledged what they have done.
That is hugely powerful,” exclaims Sara Turner, Head of Employee Benefits and Wellbeing, KPMG.
Experts opine that combining regular employee recognition programs with social recognition efforts will lead to even better results.
The Edenred study states that top corporate organizations for best practices in rewards and recognition use a healthy mix of monetary rewards, non-monetary rewards and social recognition, based on survey data from more than 13,000 employees and HR managers at more than 70 companies across 11 industries.
So, seize the opportunity to engage employees and take recognition to a new level to build an enviable employer brand.