We were thrilled when science fiction and Hollywood movies painted a rosy picture of a utopian future where AI makes everything possible. However, that initial excitement about AI and its attendant technologies was crushed when the AI bubble burst. Recent significant breakthroughs, developments and innovations in the field have renewed public interest and excitement about AI's possibilities.
AI encompasses several interlinked technologies including robotics, computer vision, neural networks, cognitive computing, machine learning, NLP (Natural Language Processing) and NLG (Natural Language Generation). These technologies have found several use cases in virtually every industry in today's globalized marketplace.
With the continuous evolution of technology in the Industry 4.0 space, many enterprises are joining the trend to develop new functionalities and solutions around AI and its interlinked technologies. Let's take a look at some of the most exciting AI trends to watch for in the latter part of 2018.
NLP, one of the fastest evolving branches of AI, focuses on understanding, analyzing and translating human languages into machine-readable form. NLP-based applications enable simplified human-machine interactions by helping machines understand the finer nuances of speech such as pronunciations, dialects and context.
Furthermore, NLP helps computers develop reading comprehension capabilities that rival and even surpass those of humans. In January, a deep neural network model developed by Alibaba scored better than humans on a Stanford University reading comprehension test containing more than 100,000 questions. This was the first time an AI model had outscored humans on such a test. The next day, Microsoft's AI model also bettered the human score.
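At its simplest, NLP maps free text to structured signals a machine can act on. As a toy illustration only (production NLP models are vastly more sophisticated), here is a naive bag-of-words sentiment scorer in Python; the word lists are hypothetical:

```python
# Naive bag-of-words sentiment scoring: count positive and negative
# words and compare. Word lists here are illustrative, not exhaustive.

POSITIVE = {"great", "excellent", "good", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # positive
```

Real NLP systems replace the hand-made word lists with learned representations, but the pipeline shape (tokenize, score, decide) is the same.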
Most urban cities are ill-equipped to deal with the demands of their exploding populations. City administrators are finding it much harder to provide clean air, water, electricity and easy transportation. Other rising concerns are access to public services and adequate health care. Government organizations also strain to maintain law and order using limited resources.
Resolving these urban population challenges will involve the creation of smart cities. Such cities leverage a mix of "internet of things," big data and AI technologies to analyze image and video feeds from CCTV footage in real time, helping detect traffic congestion, accidents, crimes and disasters. These technologies also facilitate automatic control of traffic signals to prioritize the passage of law enforcement agencies, emergency response teams and VIPs.
In addition, AI can facilitate the construction of better and environmentally sustainable buildings. Leveraging AI in design and construction activities will help in the management of construction assets, performance diagnostics, the design of customized construction material using nanotechnology and improved selection of vertical formwork systems.
With Tesla setting the pace by launching its self-driving vehicle, other automakers are set to follow suit. In 2018 we will see a lot of traditional automakers launching automobiles with advanced self-driving technology. The Audi A8 featuring AI-based self-driving technology will make a debut appearance this year with Volvo and Cadillac set to launch similarly-equipped vehicles soon after.
DARPA, the Defense Advanced Research Projects Agency, is pioneering several technological breakthroughs in the field of AI. Tasked with developing new technologies for the U.S. military, DARPA was instrumental in the development of GPS navigation and the internet. Together with Boston Dynamics, the research agency is setting its sights on developing innovative robots for disaster relief. The robots could also be used in military applications. The famous Atlas robot made its debut in 2013 and is one of the AI-powered robotics technologies still in development.
Brands like Hearst, CBS and USA Today are leveraging AI-based solutions and tech to create content. Wibbitz now offers a SaaS platform where publishers can use AI video production to convert written text into video content.
Most publishers spend days on end creating content for fans, followers and subscribers on social media and their websites. With services like Wibbitz, publishers can create engaging video content in minutes.
The Associated Press is also using Wordsmith, a tool designed by Automated Insights, to apply NLG in the creation of news stories from earnings data. 2018 will see more media companies using AI-based video generation and content-creation technologies.
Fintech startups are challenging the incumbents in the financial industry in the provision and distribution of advisory and standardized financial products. They leverage AI-based automated advisory products to offer better financial advice to clients.
Further, machine learning models are rapidly replacing traditional methods of predictive analysis to provide faster and more accurate predictions of market trends. These models also help financial companies detect and prevent financial fraud by identifying and predicting anomalous activity based on historical data.
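The anomaly-detection idea can be sketched very simply: flag any transaction that deviates too far from historical behavior. A minimal z-score screen follows, with hypothetical transaction amounts; real fraud models use far richer features and learned thresholds:

```python
# Flag transactions more than 3 standard deviations from the
# historical mean. Amounts below are hypothetical sample data.
from statistics import mean, stdev

history = [12.5, 40.0, 25.0, 33.0, 18.0, 29.0, 22.0, 35.0]

def is_anomalous(amount: float, threshold: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold

print(is_anomalous(30.0))    # typical amount, not flagged
print(is_anomalous(5000.0))  # far outside history, flagged
```

A production system would update the history continuously and combine many such signals before blocking a payment.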
With Alphabet investing over $30 billion in the development of innovative AI technologies, AI has become the new technological frontier in the Industry 4.0 economy. More and more companies (and even governments) will continue to invest time, money and energy into the advancement of AI and its interlinked technologies.
Although ransomware has been around for nearly 20 years (in various forms), the current age of ransomware began with CryptoLocker in 2013. Since then, hackers have continued to develop more sophisticated exploits.
Ransomware refers to malware (i.e., malicious software developed by cybercriminals) that takes over victims' computers and denies them access to their files by encrypting or deleting them. The attacker sends along a note containing the ransom amount as well as instructions on where and how to pay it.
Some experts believe that the increased sophistication of ransomware attacks was precipitated by the advent of digital currencies such as Ethereum and Bitcoin - giving hackers the means to conceal their dubious transactions.
In previous years, ransomware attacks were targeted at medium to large-sized enterprises; however, a troubling amount of ransomware is now directed at small businesses and individuals. As such, it is likely that you have encountered or will encounter various strains of ransomware in the near or distant future. No one is safe from the exploits of cybercriminals who use ransomware to achieve their financial ends.
Tackling this cyber menace requires individuals to understand ransomware, how it gains access to endpoints and networks, its mode of operation, what to do in the event of a ransomware attack and how to prevent such attacks from occurring.
There are two major kinds of ransomware that one cannot remove by simply rebooting the system or clearing the browser cache.
The first type is referred to as screen-locking ransomware. Once the ransomware gains access to the system and executes, it locks out the user and flashes a message on the screen stating the ransom demands. It also displays a warning informing users that the computer will remain unusable until the ransom is paid.
The second most common type of ransomware is known as encrypting ransomware. It works by deleting or encrypting the files stored on infected computer systems and databases. In recent years, more and more hackers are creating and deploying encrypting ransomware. Some of the more popular strains such as Crysis, GoldenEye and Jigsaw are programmed to slowly delete stored files over a 72-hour span.
No matter the type of ransomware, it's always a good idea to verify the authenticity of the threat before taking further action. Fraudsters regularly send out fake ransom notes that threaten to delete or encrypt your files if their demands are not met.
Before attempting to remove ransomware, ensure that the infected system is not connected to any other device. Disconnect all peripherals such as webcams, printers, external hard drives and other external storage media. You should also disconnect the infected system from the internet.
To remove screen-locking ransomware from a Windows system, try opening Task Manager by simultaneously pressing the Control, Shift and Esc keys and ending the compromised application. On a Mac, press the Command, Option and Esc keys to open the Force Quit window and end the compromised application from there.
If this doesn't work, you need to consider your options. Take a screenshot of the ransom note as evidence in case you decide to file a police report. Restart the system in Safe Mode where you can use a free malware removal tool to disable the ransomware. If this doesn't work, attempt to restore your system to an earlier date using Windows System Restore or Time Machine.
The steps required to remove encrypting ransomware are similar to those for screen-locking ransomware. Once in Safe Mode, you can attempt to recover your files whether they've been deleted or encrypted. If you're dealing with encrypted files, there are solutions you can use to identify the encryption used by the attackers. There are also websites equipped with decryption tools that can remove some kinds of ransomware encryption.
If this doesn't work and you have your files backed up on an external device, it's best to reinstall your OS and import these files back to your system.
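One hedged heuristic for telling encrypted files from intact ones: ciphertext tends toward maximum byte entropy (about 8 bits per byte), while ordinary documents score much lower. A quick entropy estimate in Python; the thresholds are illustrative, not definitive, and compressed files can also score high:

```python
# Shannon entropy of a file's bytes: values near 8.0 suggest
# encrypted (or compressed) content; plain text scores much lower.
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plain = b"hello hello hello hello"
random_like = bytes(range(256))  # stand-in for ciphertext

print(byte_entropy(plain))        # low entropy, readable content
print(byte_entropy(random_like))  # 8.0, the maximum for bytes
```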
Generally, you can prevent a ransomware attack by installing robust endpoint security solutions and following these practices:
Ensure that your operating system is up-to-date and patched. This ensures that there are fewer vulnerabilities for hackers to exploit.
Never install software from unknown or untrusted third-party sources. Ensure that you know what a piece of software does before giving it administrative privileges.
Install the latest antivirus software (which detects and prevents ransomware from executing its exploits) and whitelisting software (which blocks the execution of unauthorized software like ransomware).
Back up your files regularly. Although this doesn't prevent a ransomware attack, it reduces the impact of such attacks and enables you to continue executing business-critical functions.
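As a sketch of the backup practice above, a few lines of Python can copy a folder to a timestamped destination on another drive; the paths shown in the commented example are hypothetical:

```python
# Copy a directory tree to a timestamped backup folder. Real backup
# tooling adds rotation, verification and off-site replication.
import shutil
from datetime import datetime
from pathlib import Path

def backup(src_dir: str, dest_root: str) -> Path:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(dest_root) / f"backup-{stamp}"
    shutil.copytree(src_dir, dest)  # destination must not already exist
    return dest

# Hypothetical usage, e.g. backing up documents to an external drive:
# backup("C:/Users/me/Documents", "E:/backups")
```

Keeping the destination on a drive that is disconnected between runs is what protects the backup from the ransomware itself.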
Once you understand the various vectors through which ransomware can enter computers, you should take proactive steps to prevent their ingress by installing AV software and following good security practices.
A blockchain is a growing list of records known as blocks, where each block is secured and linked to the one before it by cryptographic algorithms. Each block in the chain contains transaction data, a timestamp and a cryptographic hash of the previous block.
Transactions are validated by full nodes and the order of the transactions is achieved through a decentralized proof of work. At its core, a blockchain is a decentralized digital database that keeps an unalterable record of transactions.
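The block structure described above can be sketched in a few lines of Python: each block carries its data, a timestamp and the hash of the previous block, so altering any earlier block breaks the chain. This is a toy model for illustration, not a full protocol:

```python
# Minimal linked-block sketch: tampering with an earlier block
# changes its hash and invalidates every block that follows it.
import hashlib
import json
import time

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev=None):
    return {
        "data": data,
        "timestamp": time.time(),
        "prev_hash": block_hash(prev) if prev else "0" * 64,
    }

genesis = make_block("genesis")
second = make_block("alice pays bob 5", genesis)

# The second block's stored hash matches the genesis block exactly.
print(second["prev_hash"] == block_hash(genesis))  # True
```

Change one character of the genesis block's data and the stored `prev_hash` no longer matches, which is what makes the record unalterable in practice.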
The Bitcoin protocol was the first successful application of blockchain's distributed public ledger for peer-to-peer transactions. In recent years, enterprises have begun to apply blockchain technology in use cases other than p2p financial transactions.
However, a thorough understanding of the different types of blockchain is needed to facilitate such use cases. Let's take a look at the two categories of blockchain protocols, their similarities, differences and their various applications.
Public blockchains are permissionless and open source - meaning that anyone can join the network and participate/benefit from the technology. Since no one controls the network, anyone can input data into the network. Its decentralized nature ensures that all data that has been validated on the blockchain remains immutable.
Each data operation is recorded and confirmed anonymously, forming a record of events that is shared between several parties.
Bitcoin and Ethereum are prominent examples of public blockchain networks. Ethereum also functions as an open software platform enabling developers to build and deploy decentralized applications.
A private blockchain is a permissioned network where access is restricted to invited persons or entities who conform to a set of rules specified by the network starter. Unlike decentralized public chains, a private blockchain acts like a centralized database system that limits access to certain users. The network is controlled by one or more entities and all transactions must be validated by these entities before they are added to the chain.
Federated or consortium blockchains are a subset of private blockchains. For these networks, the leaders predefine the consensus mechanism used in authorizing transactions. For instance, a group of financial institutions may create and maintain a consortium blockchain to facilitate transactions between participants. If there are 10 participating banks in the network, the consensus mechanism could be that 7 of the 10 banks must authenticate a transaction for it to be considered valid by the network.
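The 7-of-10 rule described above amounts to a simple m-of-n check. A minimal sketch in Python, with hypothetical bank names; a real consortium network would verify cryptographic signatures rather than names:

```python
# m-of-n consortium validation: a transaction counts as valid only
# when at least REQUIRED member banks have approved it.
BANKS = [f"bank-{i}" for i in range(10)]  # hypothetical members
REQUIRED = 7

def is_valid(approvals):
    members = set(approvals) & set(BANKS)  # ignore non-member approvals
    return len(members) >= REQUIRED

print(is_valid({f"bank-{i}" for i in range(7)}))  # True: 7 of 10
print(is_valid({f"bank-{i}" for i in range(5)}))  # False: only 5
```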
Public and private blockchains are both decentralized peer-to-peer networks where all participants maintain replicas of digitally signed transactions on a shared ledger. These replicas are kept in sync through protocols known as "consensus." In both types, the ledger is immutable once written, although false information can still be entered into the network by malicious or mistaken participants.
Public and private blockchains differ in who is allowed to join the network, maintain the shared ledger and execute the consensus protocol.
Unlike private blockchains, public blockchains are completely open networks where anyone can join and participate. To encourage more participants to join the network, public blockchains usually employ an incentivizing mechanism. Public blockchains are maintained by participants who have sufficient computing power to do so, while allowing for full transparency of the information contained within them.
Due to the large-scale nature of the network, public blockchains require a substantial amount of computational power to maintain their distributed ledgers. Achieving consensus requires each node in the network to solve a complex and resource-intensive cryptographic problem. This problem, referred to as a proof of work, ensures that each node is in sync.
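A toy proof-of-work search makes the "resource-intensive problem" concrete: find a nonce that, hashed together with the block data, produces a digest starting with a required number of zero digits. Real networks use vastly harder targets, but the asymmetry is the same — expensive to find, cheap to verify:

```python
# Brute-force nonce search: difficulty counts the leading zero hex
# digits required of the SHA-256 digest. Difficulty 4 is trivial;
# real networks demand the equivalent of many more zeros.
import hashlib

def proof_of_work(data: str, difficulty: int = 4) -> int:
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = proof_of_work("block-42")
# Anyone can verify the winner with a single hash computation.
print(hashlib.sha256(f"block-42{nonce}".encode()).hexdigest())
```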
Admittance into a private network is governed by a set of rules set out by the network starter. Enterprises that set up private blockchain networks usually set up permissioned networks. This restricts the entities allowed to join the network. In most cases, these entities are given access only for certain transactions. A prominent example of a permissioned blockchain is the Linux Foundation's Hyperledger Fabric.
Access control mechanisms for private blockchains vary. Existing participants may decide how future entrants gain access to the network or licenses for participation may be issued by regulatory authorities. Once an entity is allowed into the network, it must play a decentralized role in maintaining the network.
Permissioned blockchains offer enterprises the ability to facilitate security-rich data exchanges for most industry use cases especially in health care and the financial industry. Private blockchains rely on their participants to authenticate transactions and maintain the integrity of the blockchain protocol.
As such, private blockchains are more efficient in terms of compliance with regulatory requirements and scalability. However, they are more open to manipulation due to centralized governance. The blockchain can be altered or hacked by participants within the network. This is especially true for consortium blockchains where conspiring banks may alter information (such as debt obligations) in their internal networks.
As big data tools bring together large amounts of data for business strategy planning, agile software development can help companies focus on what matters to the company first. Agile development allows for early validation and learning throughout the project.
Big data management and analytics tools are integral parts of business strategy planning in today's highly competitive marketplace. They are used to support various systems including data warehouses and recommendation systems, and can be found within user-facing applications such as ERP and CRM. Big data enables business leaders to leverage both structured and unstructured data to gain a more in-depth understanding of the impact of their business practices and operations on employees, customers, suppliers, and other stakeholders.
As such, big data has become a core strategic asset for most organizations - and has made the management of data a top priority for C-suite leaders. The structured and unstructured data (collated from people and processes) that make up big data can facilitate the development of cutting-edge customer retention and acquisition strategies. It can also help corporate management make better business decisions by revealing areas where processes and products can be made more efficient. This results in the reduction of overall organizational risk.
Most enterprises leverage big data as the take-off point for their digital transformation agenda. They invest in robotics, machine learning, analytics capabilities and other technologies to ensure a successful digital transformation initiative.
Although businesses now spend most of their budget on transforming data-related IT processes and infrastructures, the benefits of doing so are only felt in isolated areas. This is because such processes and infrastructure are created for specific functional areas or business units and are difficult to implement organization-wide due to lack of central governance or end-to-end logic. As such, critical business information remains in silos and isolated systems.
When it comes to data management, reports show that most organizations face a huge talent gap. IT and business groups have limited expertise in newer emergent approaches to data delivery, data-migration technologies, capabilities and architectures. Therefore, there is a great need for agile leaders who understand the importance and benefits of big data in both tech and business groups.
For businesses to rapidly generate analytics-based insights to facilitate better business decisions and more efficient processes, they need a coordinated data-management strategy that can be deployed across multiple functional and business units. To combat these challenges in big data management, cutting-edge businesses have begun to leverage agile practices in running their data programs.
Agile refers to the time-tested methodology used by software development organizations to develop software and effectively manage the development environment from sprint planning to product release. It is a collaborative approach that helps cross-functional teams to design, build and release software applications, updates and new features rapidly to customers. It is characterized by short iterative development cycles where software is tested, refined and enhanced on a rolling basis.
Similar to Agile software development practices, Agile data also focuses on a joint approach to delivery and development. The IT and business groups in Agile data are the cross-functional teams described in agile development.
Implementing Agile data requires the two groups to collaborate in data labs that focus on generating dependable insights to enable organizations to quickly address their highest priorities and realize more positive business outcomes. Organizations that deploy Agile data realize immediate product and process improvements and set the stage for future innovations and advances in big data infrastructure.
By necessity, agile data relies on several organizational capabilities and core principles. These principles include a business-driven approach to digital transformation initiatives and by extension, data management. Such an approach requires agile-minded organizations to create a list of opportunities for new and enhanced products/processes as well as probable business use cases based on advanced analytics.
The data pertinent to these opportunities and use cases must be collated and analyzed to identify key customer activities and characteristics. For instance, financial institutions that face disruption from emergent digital firms may conduct a detailed analysis of critical business factors (such as time to serve customers or purchase behaviors) to increase product/service offerings to consumers, reduce costs and improve internal processes.
Once identified, business and IT teams rank-order the use cases and opportunities and determine the levels of big data architecture, quality and governance required for each. This results in the creation of two detailed road maps. The first highlights digital business budgets, timeframes, objectives and milestones while the second defines the data requirements needed to provide seamless analytics support and build effective big data architecture.
The collaboration between IT and business teams that is necessary to develop these roadmaps also facilitates the breaking down of the cultural barriers that existed in traditional organizations. IT managers are exposed to the business element while the business team familiarizes itself with the tech end of things.
Furthermore, it ensures joint ownership of data-management and data-migration protocols. This helps the organization quickly validate the business case for proposed solutions and ensure high-quality solutions, since they are monitored from both a business and a tech standpoint. Thus, applying Agile software development practices to big data ensures continuous improvement.
The Internet of Things (IoT) is one of the hottest topics in the business world right now. If for some reason you've been living under a rock (or a server rack from the late '90s): in a nutshell, IoT is when everyday devices are fitted with sensors that collect data, which is then transmitted over the internet for analysis and storage. Heralded as the next technological paradigm shift, IoT is predicted by Business Insider to attract almost $15 trillion in total investment between 2017 and 2025, with the number of IoT devices rising from 9 billion to over 55 billion!
With the capacity to transform many different industries, it's imperative that you stay informed on what's happening in this exciting field. So, take a look at our list of the top seven trends in IoT for 2018.
Embedded sensors. The true benefits of IoT are realized through embedded sensors. As computer processing units have become smaller, more powerful and more energy efficient, it has become feasible to embed them in nearly any product. Your new smart watch can monitor your heart rate, calories burned and more. Refrigerators can be fitted to precisely control humidity and temperature automatically. Cars in the future will be able to talk to each other to prevent accidents. Inventory management will be revolutionized, as items will be able to be tracked and counted in real time. The possibilities are nearly limitless.
Near-universal connectivity. With cost, size and power constraints becoming less of a problem every year, there will be a time in the near future when almost every device manufactured could be connected and gather data. This concept is called near-universal connectivity and it could happen faster than you think.
Multiple stage infrastructure. An often overlooked aspect of this data revolution is that current informational infrastructures are wholly unsuited to handle the vast amounts of data produced. With 55 billion potential sensors gathering data, an entirely new infrastructure will have to be created to store, parse, analyze and present this information. Multiple stage infrastructure is a potential solution to this monumental issue. The first stage is the IoT hardware itself, namely sensors, actuators and other devices. The second stage is the system that collates the data and converts it from analogue to digital information. The next stage is where the data is processed, parsed and put into more understandable containers. The last stage is a system where the information is analyzed, stored and eventually presented to an end user. Infrastructure following these parameters could be constructed to make all of this information decipherable and actionable.
Enhanced security. Another major hurdle is making all of this technology safe and hack-proof. You wouldn't want a hacker to be able to gain access to your self-driving car, or tinker with the algorithms of your pacemaker! Before most devices contain sensors and transmit data, there will need to be considerable security measures in place to protect them. That's why, according to Gartner, worldwide IoT spending on security will be over $1.5 billion in 2018 and rise to over $3.1 billion by 2021.
Practical mobile platforms. Since a substantial amount of computing power is necessary to make the best use of an IoT system, most platforms up to now have been constructed using desktop computers. Mobile devices now contain a significant amount of processing power that can handle IoT data. This trend will only continue in the future and soon, mobile-only IoT applications will be the norm. This will be especially helpful in situations where issues such as latency will need to be mitigated. For instance, if your self-driving car had to transmit data to a desktop computer in the cloud, the latency introduced during that transfer would be potentially dangerous.
Increased investment in machine learning. Machine learning and IoT go together like the Dallas Cowboys and underachievement. So, it's no wonder that investment in machine learning correlates positively with investment in IoT. The International Data Corporation (IDC) estimates that investment in machine learning will rise from $12 billion in 2017 to over $57 billion in 2021. Machine learning is excellent at taking a goal (e.g. increase energy efficiency) and using vast amounts of data to find the important input variables necessary to make that happen. With IoT, you can have so many variables and data points that normal data analysis becomes infeasible. Machine learning will help businesses turn information into actionable initiatives.
Self-driving cars. More than $80 billion has been invested in the research and development of self-driving cars by almost all of the major automotive manufacturers. Although obviously more research is needed to ensure the highest safety standards, the benefits of self-driving cars could be enormous. Combined with Smart City technology, your morning commute could be much safer and more efficient. Imagine your car communicating with other cars and traffic lights to reduce slowdowns, accidents and best of all, road rage!
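The multiple-stage infrastructure trend above can be sketched as a simple staged pipeline. The stage behaviors here are purely illustrative (hypothetical sensor names and thresholds), but they mirror the four stages described: hardware, digitization, processing and analysis:

```python
# A four-stage IoT data pipeline in miniature.
def sense():
    # Stage 1: hardware produces raw readings (hypothetical temps).
    return [21.4, 21.9, 35.2, 22.1]

def digitize(readings):
    # Stage 2: collate readings into structured digital records.
    return [{"sensor": "temp-1", "value": r} for r in readings]

def process(records):
    # Stage 3: parse and bucket into understandable containers.
    return {"normal": [r for r in records if r["value"] < 30],
            "alerts": [r for r in records if r["value"] >= 30]}

def analyze(buckets):
    # Stage 4: analyze and present a summary to the end user.
    total = len(buckets["alerts"]) + len(buckets["normal"])
    return f"{len(buckets['alerts'])} alert(s) out of {total} readings"

print(analyze(process(digitize(sense()))))  # 1 alert(s) out of 4 readings
```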
This list merely scratches the surface of what IoT will mean for industry in the future. No matter what sector you work in, take some time to study up on how these new technologies will be affecting your business.
We often hear the terms artificial intelligence, machine learning and deep learning used interchangeably. But they are not the same. Machine learning and deep learning are both part of artificial intelligence, yet they also differ from each other.
So, what are these differences and how do they matter in our discussion of artificial intelligence and its future? Here are some key points to remember.
First, it is helpful to describe machine learning: what it is and what it is not. While the idea of artificial intelligence started in the mid-1950s with the ambition of mimicking human intelligence, general AI proved difficult to achieve. Instead, narrow AI tasks limited to specific fields developed. The term machine learning was coined in 1959 by Arthur Samuel and was defined as "the ability to learn without being explicitly programmed."
Even this was difficult until recently. Why? Because machine learning depends on large amounts of data being fed into a model as experience. The machine then performs certain tasks, and by comparing its performance against what counts as a successful outcome, it learns. With each new round of experience, it finds new and better ways to complete those tasks.
Sounds complex? That's because it is hard to come up with a simple definition of machine learning that really explains it, and true machine learning takes a lot of computing power. This is because the data needed to count as experience must include hundreds, if not thousands, of data points. The more data, the more accurate the machine learning becomes.
That being said, there are a few types of machine learning. The first two are supervised and unsupervised learning. What does that mean exactly? Supervised learning is when the program is given a large set of data with correct answers.
For instance, when it comes to recognizing hand-written numbers, the program is fed large quantities of handwritten digits, each with a label identifying the number pictured. Then it is given a new set of numbers without labels and uses what it has learned from the labeled examples to identify images it has not seen before.
This is how you are able to deposit a check into your bank by taking a photo with your phone. The program recognizes the handwritten digits and translates them into known numbers to get the amount of your deposit.
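Supervised learning can be illustrated with a tiny 1-nearest-neighbour classifier: hand-made feature vectors stand in for digit images, and the labels on the training examples do the "supervising." This is a sketch, not how production digit recognizers work:

```python
# 1-nearest-neighbour: classify a new example with the label of the
# closest labeled training example. Feature vectors are made up.
def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Labeled training examples: (features, correct answer).
training = [
    ([0.1, 0.9, 0.1], "1"),
    ([0.9, 0.8, 0.9], "8"),
    ([0.8, 0.1, 0.8], "0"),
]

def classify(features):
    return min(training, key=lambda ex: distance(ex[0], features))[1]

print(classify([0.15, 0.85, 0.2]))  # "1" -- nearest labeled example
```

Swap the three hand-made vectors for thousands of pixel arrays and the same principle drives real handwriting recognition.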
Unsupervised learning, though, simply finds similarities between data. In other words, there are no labels. The idea behind this kind of analysis is to cluster data into groups based on similarity while at the same time compressing the data, so it takes up less digital space but is still useful.
Unsupervised learning is often used to group and organize data before it is given to a supervised learning network. For things like digit recognition, image recognition and other similar tasks, it is used to gather similar data that is then labeled, like the images of the number 1.
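A minimal sketch of this kind of unsupervised grouping: split unlabeled one-dimensional points into two clusters by repeatedly reassigning each point to the nearest centre, a toy version of k-means. No labels appear anywhere; structure emerges from similarity alone:

```python
# Two-cluster grouping of 1-D points by iterative reassignment.
# Assumes the data actually forms two well-separated groups.
def two_means(points, iters=10):
    c1, c2 = min(points), max(points)  # initial centres
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

low, high = two_means([1.0, 1.2, 0.8, 9.0, 9.5, 8.7])
print(low)   # [0.8, 1.0, 1.2]
print(high)  # [8.7, 9.0, 9.5]
```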
The reinforcement type of machine learning is pretty simple in principle. Think of your puppy you are training. If you say the word "sit" and he sits, he gets a treat. Eventually, he learns that if he sits when you tell him to, he gets a treat.
The same is true for a machine. The more often it performs an action with the same result, the more it understands that one action results in a certain outcome.
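The treat-for-sitting loop above maps directly onto a value-update rule: try actions, observe rewards, and nudge each action's estimated value toward what was observed. A toy sketch with a made-up action set:

```python
# Tiny reinforcement-learning loop: estimates converge toward the
# true reward of each action. Actions and rewards are illustrative.
def reward(action):
    return 1.0 if action == "sit" else 0.0  # treat only for sitting

values = {"sit": 0.0, "bark": 0.0}  # estimated value of each action
alpha = 0.5  # learning rate

for _ in range(10):
    for action in values:
        # Move the estimate a step toward the observed reward.
        values[action] += alpha * (reward(action) - values[action])

print(max(values, key=values.get))  # "sit" ends with the highest value
```

Full reinforcement learning adds states, exploration and delayed rewards, but the update rule is recognizably this one.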
Deep learning is, at its essence, a part of machine learning, but the most sophisticated portion. There are multiple layers of input and data collection, and all of these layers are combined to determine a result.
For example, think of a self-driving car, even at the level we have now. One layer detects lane boundaries, another detects road conditions, and still another detects other vehicles. Yet another layer pays attention to the vehicle's location on a map and factors in its destination.
All of these factors combine to control speed, steering, lane position and changes and turns that are made to maintain the optimal route.
That is a lot of information to essentially say that the primary difference between machine learning and deep learning is the level of complexity of the data input and the computing power needed. Just as machine learning is a part of artificial intelligence, deep learning is a part of machine learning, but it is like machine learning on steroids.
If machine learning has the ability to think like a small child, deep learning takes that ability to an adult level, and it is hoped, at some point, to reach genius level. In this way, they are two of the same thing, one young and inexperienced, and the other a professional.
Last year, ransomware was crowned the leading method of cyberattack. But this year there's a new face in town, as cryptojacking steadily rises to the top of the list. Riding the wave of cryptocurrency's popularity and its volatile market, cryptojacking has become a preferred way for cryptocurrency miners to cash in. Let's take a deep dive into this rising cyberthreat and the significant detriment it can pose to your business.
To understand how cryptojacking works, we must first understand the impetus behind the surge in this practice. In short, cryptojacking is the result of cryptomining gone rogue. A cryptominer uses their computer systems to verify cryptocurrency transactions and add them to the digital blockchain ledger. Miners receive a small commission of cryptocurrency in return for these efforts. But cryptomining requires significant processing power; it takes an up-front investment in computer hardware, as well as constant electricity to run these systems. When the value of cryptocurrencies falls, it becomes much less profitable to mine, and miners must balance the cost of the electricity against plummeting profits.
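The economics above come down to a simple break-even calculation: coin reward minus electricity cost. The figures below are hypothetical, chosen only to show how a price crash flips the same rig from profit to loss.

```python
# A back-of-the-envelope sketch of mining economics. All numbers are
# invented for illustration; real rates vary widely by coin and region.

def daily_profit(coins_per_day, coin_price, watts, price_per_kwh):
    revenue = coins_per_day * coin_price
    electricity = (watts / 1000) * 24 * price_per_kwh  # kWh used per day
    return revenue - electricity

# Profitable while the coin price is high...
high = daily_profit(0.001, 8000, 1200, 0.12)
# ...but a market crash pushes the identical rig into the red.
low = daily_profit(0.001, 2000, 1200, 0.12)
```

When that electricity bill lands on someone else's systems, as in cryptojacking, the miner keeps the entire revenue side of the equation.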
Here is where cryptojacking takes to the stage. Cryptojacking is the act of performing cryptomining on a system without permission from the system owner. Cryptojacking allows miners to avoid shouldering the energy and costs by using systems that are not their own. These jackers infect websites, create ransomware, or send out malicious email links to get their mining code onto the systems of unsuspecting victims.
Historically, cyberattacks have required victims to install a program onto their system. But this recent evolution is much more unnerving. The latest in-browser cryptojacking scripts do not need to install a program in order to run on an unsuspecting system. In-browser cryptojacking requires nothing more than for the user to load a browser page, watch an online advertisement, or click on a phishing link.
Cryptojacking is so successful because it often goes entirely unnoticed by its victims. For example, the malicious JavaScript embedded in a browser simply runs when the page is loaded, requiring no opt-in or installation by the user. The offending code often works behind the scenes, bypassing virus software and running on the system long after the browser is closed. Unlike ransomware that extorts money from its victims, cryptojacking is passive, quietly consuming extra computer processing power without permission. The script simply eats up processing cycles in the background, undetected by all but the most vigilant of computer users and network security teams.
The prevalence of cryptojacking should be of great concern to any sensible business owner. In November 2017, Adguard reported an in-browser mining growth rate of 31%. In a recent report by McAfee, the incidence of mining malware alone rose by 629% in the first quarter of 2018. Because cryptojacking provides a low-risk money-making effort with a low barrier to entry, its popularity is expected to continue to boom.
Cryptomining requires processing power to run. A slightly slower system may sound fairly benign for any standard computer user, but more dangerous is the prized target for cryptojackers: your robust corporate servers. A server is more powerful than a single workstation, and it offers cryptojackers a jackpot of processing power and free electricity, supporting a far more profitable, energy-intensive operation. Over time, a cryptojacking infection will compromise processor performance and drive up overhead costs by shortening the lifespan of your organization's hardware.
Cryptojackers do not make money by conducting high-visibility stunts such as stealing your personal data, making threats, or eavesdropping on private communications. Instead, a server compromised by undetected cryptojacking can cost a business big money by slowing down web services and causing unintentional downtime. This downtime costs your business dearly in profits, customer loyalty and marketability.
It can be difficult to determine when a system has been hijacked by illicit cryptojacking. When it comes to securing your business, knowledge is power. Use the following prevention techniques to help you protect your organization:
Ensure your security team has a performance management process and can identify cryptojacking's effects on your processing power.
Install ad-blockers and anti-cryptomining extensions to ward off attacks.
Incorporate the cryptojacking conversation into your security awareness program.
Keep your web filters up to date.
Implement a behavior analysis system to more easily identify anomalous network activity.
Segment networks and apply restrictions and system interaction rules.
Routinely patch and update all computer systems.
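The behavior-analysis recommendation above can be sketched as a baseline check: flag a machine whose CPU load runs far above its historical norm, one common symptom of a hidden mining script. The data and threshold below are illustrative only; production monitoring tools are considerably more sophisticated.

```python
# A minimal sketch of behavior analysis for cryptojacking detection:
# compare a current CPU reading against the historical baseline and
# flag sustained, statistically unusual load.
from statistics import mean, stdev

def is_anomalous(baseline, current, z_threshold=3.0):
    """Flag a reading far above the historical mean (a simple z-score)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return (current - mu) / sigma > z_threshold

history = [12, 15, 11, 14, 13, 16, 12, 15]  # % CPU under normal workload
normal_day = is_anomalous(history, 14)      # typical load: no alert
mining_day = is_anomalous(history, 95)      # sustained spike: investigate
```

A single spike proves nothing, of course; it is the pattern of sustained, unexplained load that warrants a closer look by the security team.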
Cryptojacking has exploded in 2018 as the frontrunner in cyberattacks, and it can cost your business in productivity, uptime, brand loyalty, energy - and most importantly - profits. When it comes to cryptojacking, prevention is paramount. ESolveIt helps you batten down the hatches on your business and implement practical security strategies and solutions that protect your systems and suit your specific needs. Cryptojacking may not be going anywhere soon, but with ESolveIt on your side, neither is your business.
Despite popular belief, most forms of "anonymous data" can be used to identify everything about an individual, ranging from purchase histories to medical records. We release little pieces of our lives each day, either by subscribing to online services and newsletters or by filling out forms for Facebook and other kinds of online surveys.
Although service vendors assure us that the data collected is anonymous, these digital breadcrumbs can be reconstituted into a cohesive whole that can be traced back to their originators.
In August 2016, the Australian government released the medical billing records, including surgeries and prescriptions, of 2.9 million people. Although the released records were anonymized (with names and other identifying features removed), a University of Melbourne research team proved that it was easy to re-identify people and access their medical history without their consent.
When the Federal Trade Commission began investigating the unauthorized access of data belonging to over 50 million Facebook users, the issue of privacy was once again brought to the fore. The data, which was accessed by Cambridge Analytica, was provided by Aleksandr Kogan, the developer of a Facebook personality quiz app.
Approximately 270,000 people installed the personality quiz app on their Facebook account, and Aleksandr (like other Facebook developers) had access to the data on both the users' and their friends' Facebook accounts. Before installation, his app asked for permission to access users' (and their friends') data and then stored the collated data on a private database instead of deleting it. Once Aleksandr provided the database to Cambridge Analytica, the voter-profiling company used it to create 30 million "psychographic" voter profiles.
If seemingly innocuous data on social media can be easily obtained and contains enough identifiers to build a complete psychographic voter profile, such data can be used by cybercriminals to perpetrate any number of malicious objectives.
A prime example of anonymized data (that was later discovered to be not so anonymous) was the NYC Taxi and Limo Commission dataset release. The dataset contained the details of over 1.1 billion individual taxi trips in the city, including fare and tip amounts, locations, drop-off and pickup times, as well as hashed versions of the taxis' medallion and license numbers.
With some auxiliary knowledge, hackers could de-anonymize the data set and identify the weekly habits of individuals - where they went, how much they paid, where they spent most of their time, their home and work address, socializing patterns, et cetera. The information gathered could be used to track movement patterns to further the hacker's malicious objectives.
It is worrisome that 87 percent of individuals in the U.S. can be identified by their gender, five-digit zip codes and date of birth. Computational privacy researchers have also raised alarms about how the majority of individuals could be uniquely identified by their behavioral patterns based on location data from their mobile phones.
By analyzing the mobile phone database of approximate locations of 1.5 million individuals over a period of 15 months, it was possible to identify 95 percent of the individuals with only four data points of time and place. In fact, only two data points were needed to uniquely identify about 50 percent of the individuals.
The data points could be collated from publicly available information such as work address, home address and geo-tagged Twitter/Facebook posts.
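The re-identification finding above can be sketched as a simple lookup: even a few (time, place) points can single out one person in a dataset of "anonymous" location traces. The traces below are invented for illustration.

```python
# A minimal sketch of re-identification from location data: filter an
# "anonymous" trace database down to the users matching a handful of
# known (time, place) points. All traces here are fictitious.

traces = {
    "user_a": {("mon 9am", "cafe"), ("mon 6pm", "gym"),
               ("tue 9am", "office")},
    "user_b": {("mon 9am", "cafe"), ("mon 6pm", "park"),
               ("tue 9am", "office")},
    "user_c": {("mon 9am", "home"), ("mon 6pm", "gym"),
               ("tue 9am", "office")},
}

def matches(known_points):
    """Return the users whose trace contains every known point."""
    return [u for u, t in traces.items() if known_points <= t]

# One observed point narrows three users down to two...
coarse = matches({("mon 9am", "cafe")})
# ...and a second point pins down a single individual.
exact = matches({("mon 9am", "cafe"), ("mon 6pm", "gym")})
```

The more regular a person's movement pattern, the fewer points an attacker needs, which is why so few data points sufficed in the mobile phone study.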
The implications of the above research are far-reaching. If four data points are enough to uniquely identify individuals, it means that anonymity no longer guarantees privacy, thus rendering ineffectual most of the laws and regulations concerning consumer privacy.
However, true anonymity can still be achieved in the digital age. Individuals should regularly review the privacy settings on their social media accounts and scrutinize the data they give out when filling forms and participating in online surveys.
Compliance with the GDPR (European General Data Protection Regulation) is a step in the right direction and can help boost data privacy. Data should be rendered anonymous in such a way that the data subject is no longer identifiable, and data subject rights (such as the right to be informed, the right to data portability, and the right to be forgotten) should be enforced.
Privacy concerns are on the rise and will only get worse in the Industry 4.0 era. With individuals spending most of their lives online and leaving digital breadcrumbs everywhere (which can ultimately be traced back to them), no one is truly safe.
Although it's convenient to pretend otherwise, researchers have shown how easy it is to re-identify people from their digital footprint. Even anonymized data can be reverse-engineered and reconstituted to uniquely identify individuals.
Once data gets into cyberspace, it tends to stay there forever. One of the ways to ensure privacy is to reduce as much of our digital footprint as possible. Privacy laws should also target custodians of consumer data (such as companies, researchers and governments) and force them to shoulder more of the legal responsibility of ensuring privacy.
Although some people use the term blockchain interchangeably with distributed ledgers, the two terms are not quite the same.
The popularity (or notoriety) of cryptocurrencies brought blockchain into the limelight and made it synonymous with tokenization and smart contracts. However, blockchains are just one implementation of distributed ledger technology (DLT).
For over a decade, blockchain was the first and only fully functional form of DLT; however, the rapid advancement of the crypto industry has precipitated the creation of other DLT systems such as RaiBlocks (now Nano), Hashgraph, peaq and IOTA's Tangle.
These other forms of DLT are rapidly gaining popularity, thus reducing the industry's reliance on traditional blockchain systems. With more projects leveraging these forms of DLT, it is essential for enterprises to understand the differences between blockchains and distributed ledgers.
A distributed ledger is a database that exists among several participants or across several locations while DLT describes the technologies used to publicly or privately distribute information and records to the entities who use them.
Distributed ledgers are spread across several computing devices or nodes, where each device/node replicates and saves identical copies of a record to the ledger.
One of the most sought-after features of distributed ledgers is their decentralization. Individuals and organizations typically store their data on centralized databases that live at fixed locations, necessitating the use of third parties.
Distributed ledgers are not maintained by a central authority; they are decentralized, shifting the responsibility of managing data from intermediaries or a central authority to participant nodes. Enterprises can use DLT to validate, process and authenticate transactions as well as other forms of data exchanges.
Each update on a distributed ledger is independently constructed and recorded by individual nodes. Before an entry is uploaded to the ledger, it must be validated (through voting) to ensure the addition of a single true copy. The voting is automatically carried out by a consensus algorithm. Once consensus is reached, the ledger updates itself and each node saves the agreed-upon copy of the ledger.
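The voting step described above can be sketched as a majority tally: each node proposes the entry it observed, and the ledger accepts the value backed by a strict majority. This toy tally stands in for a real consensus algorithm such as Raft or PBFT.

```python
# A minimal sketch of consensus by voting: accept an entry only when a
# strict majority of nodes agree on the same value. Real consensus
# protocols handle network faults and malicious nodes far more carefully.
from collections import Counter

def consensus(votes):
    """Return the value a strict majority of nodes voted for, else None."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count > len(votes) / 2 else None

# Four honest nodes outvote one faulty node, so the entry is accepted.
accepted = consensus(["tx42", "tx42", "tx42", "tx42", "tx99"])
```

Once the vote settles on a single true copy, every node records that same agreed-upon version, which is what keeps the replicated ledgers identical.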
The structure and architecture of distributed ledgers help to cut down the cost of trust, thus reducing dependence on regulatory compliance officers, notaries, governments, lawyers and banks.
Distributed ledgers offer individuals, enterprises and governments a new paradigm for collecting and communicating information and are set to revolutionize the way these entities interact with each other.
Blockchain is a type of distributed ledger where data is organized into blocks that are logically linked together to provide a valid and secure distributed consensus.
Like all distributed ledgers, blockchains do not depend on centralized authority or servers; they are managed by and distributed across peer-to-peer networks. Data quality is maintained by computational trust and database replication. However, the structure of a blockchain is unique and distinct from other forms of distributed ledgers.
The data on a blockchain is grouped and organized into blocks. Each block of data is closed by a cryptographic signature known as a "hash." This hash points to the next block, thus creating an unbroken chain of continuous data. The hash ensures that the encrypted information within each block cannot be manipulated.
A blockchain is a continuously growing list of records. It is built on an append-only structure, making deletion and alteration of data on earlier blocks impossible. Data can only be added to the database.
As such, blockchain technology is best suited for applications such as voting, tracking assets, processing transactions, managing records and recording events.
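The hash-linked, append-only structure described above can be sketched in a few lines: each block stores the hash of its predecessor, so altering earlier data breaks every link after it. This is only a skeleton; a real blockchain adds consensus, signatures, timestamps and much more.

```python
# A minimal sketch of a hash-chained, append-only block structure.
import hashlib

def make_block(data, prev_hash):
    """Seal a block with a hash over its data and its link backward."""
    block_hash = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    return {"data": data, "prev_hash": prev_hash, "hash": block_hash}

def is_valid(chain):
    """Recompute every hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        expected = hashlib.sha256(
            (block["prev_hash"] + block["data"]).encode()).hexdigest()
        if block["hash"] != expected:
            return False  # block contents were tampered with
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the chain of links is broken
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))
```

Changing the data in any earlier block invalidates its stored hash, and with it every block that follows, which is why deletion and alteration are effectively impossible.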
Every blockchain is a distributed ledger, but not all distributed ledgers are blockchains. The two technologies share a conceptual origin; they are a digitized and decentralized log of records that requires consensus among participating nodes to ensure the authenticity of data entries.
However, they update their databases differently.
Blockchain organizes entries into blocks of data and uses an append-only structure to update its records. Once entries are made, they can't be deleted or modified in any way.
Under DLT, database owners have greater control over implementation. In principle, they can dictate the purpose, structure and functioning of the distributed ledger network. However, the network retains its decentralized nature since ledgers are stored across multiple servers that communicate to ensure the maintenance of an accurate and up-to-date record of transactions. DLT provides an auditable and verifiable history of information stored on a particular data set.
Researchers are beginning to find new and more interesting uses for blockchain and other digital ledger technologies.
These technologies are set to disrupt traditional processes and procedures in virtually all industries and have already had a significant impact on the financial sector (especially in the area of regulatory compliance and compliance policies).
Studies by Accenture show that by 2025, investment banks that leverage distributed ledger technologies may be able to reduce compliance costs by 30 to 50 percent.