IAPP CIPM – From Small & Medium Enterprise (SME) to Multinational examples Part 4
Hi, guys. In this lesson, we’ll discuss the Practical Guide for Small and Medium Enterprises, or SMEs. The guide includes 26 steps that you need to follow in order to become GDPR-compliant. So first, let’s start with creating a Data Protection Compliance folder on your company file system. This will form the basis of your proof of compliance. Every step you take towards GDPR compliance should be documented so it can be used in your defence if necessary. Keep notes of internal meetings on GDPR and decisions made on GDPR. Designate a data protection officer, whether the role is optional or mandatory for your organisation; we also discussed how this responsible person may handle data protection matters day to day.
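The compliance-folder step above can be sketched in a few lines of code. This is a minimal illustration, and the subfolder names are my own suggestions, not anything prescribed by the GDPR or the guide:

```python
from pathlib import Path

# Sketch: scaffold a Data Protection Compliance folder as described in the
# lesson. Subfolder names are illustrative, not prescribed.

def scaffold_compliance_folder(root="DP-Compliance"):
    """Create the folder tree and return the subfolder names created."""
    for sub in ["meeting-notes", "policies", "consents", "breach-log", "training"]:
        Path(root, sub).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in Path(root).iterdir())
```

Everything that proves compliance, meeting notes, decisions, policies, then lives under one root that you can hand to an auditor or regulator.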
Establish what data your business collects and where. Separate the data into categories. Identify the lawful basis for processing each category of data. Refresh consent where necessary and consult with third-party data processors, for example, Mailchimp, to ensure they have established compliance. Implement a policy to identify and handle any data subject access requests, and a policy to identify and handle any data erasure or correction requests. Create a document of non-compliance issues to show awareness of compliance omissions and to plan towards total compliance, or at least thorough risk mitigation. Create a password policy for all users, whether staff or website users, et cetera. Keep a record of consents for users who have already opted in to your marketing programme.
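The inventory-plus-lawful-basis step can be made concrete as a small data structure. This is a sketch with made-up category names; the point is that every category carries a documented lawful basis, and anything without one goes on the non-compliance list:

```python
# Illustrative data inventory: each category of personal data is mapped to
# its source and a lawful basis under Article 6 GDPR. Names are examples.

DATA_INVENTORY = {
    "customer_emails":  {"source": "website signup", "lawful_basis": "consent"},
    "employee_payroll": {"source": "HR system",      "lawful_basis": "legal obligation"},
    "order_history":    {"source": "web shop",       "lawful_basis": "contract"},
    "marketing_prefs":  {"source": "Mailchimp",      "lawful_basis": "consent"},
}

def categories_missing_basis(inventory):
    """Categories with no documented lawful basis -- these belong in the
    non-compliance document until resolved."""
    return [name for name, entry in inventory.items()
            if not entry.get("lawful_basis")]

print(categories_missing_basis(DATA_INVENTORY))  # -> []
```

A spreadsheet works just as well; what matters is that the mapping exists and lives in your compliance folder.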
You’ve already asked for it, and I’m sure you did, which is why you need to keep track of who has opted in and who hasn’t. Create a retention schedule for data. When the data has reached the end of its retention period, destroy it in accordance with the data destruction policy; that will minimise the data you hold. Train your staff so they all understand what constitutes personal data. Bonus points for practising case scenarios with your team and for putting together a staff GDPR awareness status report to note who has participated in which training. Train yourself to identify a breach, and learn how to avoid email scams and phishing attacks. Have a breach response policy.
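A retention schedule is mechanical enough to sketch directly. The periods below are placeholders for illustration; your actual retention periods come from your own legal obligations:

```python
from datetime import date, timedelta

# Illustrative retention schedule: how long each category is kept.
# The periods are examples, not legal advice.
RETENTION = {
    "invoices": timedelta(days=365 * 7),            # e.g. tax record rules
    "marketing_consents": timedelta(days=365 * 2),  # e.g. internal policy
    "cctv_footage": timedelta(days=30),
}

def due_for_destruction(category, collected_on, today=None):
    """True once a record has passed the end of its retention period and
    should be destroyed under the data destruction policy."""
    today = today or date.today()
    return today > collected_on + RETENTION[category]

print(due_for_destruction("cctv_footage", date(2024, 1, 1),
                          today=date(2024, 3, 1)))  # -> True
```

Running a check like this on a schedule is one way to make data minimisation routine rather than a one-off clean-up.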
Create a data breach log to record events, such as a specific user emailing the client list to the wrong recipient, say, to someone in the finance team instead of the intended colleague in the sales team, and so on. Ensure your website uses HTTPS. Ensure your office computers are encrypted, which is security by design; for example, on a Mac, go to Settings, then Security & Privacy, and enable FileVault. Review and document the physical security of data: USB disks, paper filing systems, behind lock and key, et cetera. Securely lock away any personal data.
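A breach log can be as simple as a structured record per incident. This is a minimal in-memory sketch; the field names are illustrative, though they track the kind of details a regulator will ask about:

```python
from dataclasses import dataclass
from datetime import datetime

# Minimal breach-log entry; field names are illustrative.

@dataclass
class BreachEvent:
    occurred_at: datetime
    description: str          # what happened
    data_affected: str        # what kind of personal data was involved
    reported_to_dpa: bool = False

breach_log: list[BreachEvent] = []
breach_log.append(BreachEvent(
    occurred_at=datetime(2024, 5, 2, 14, 30),
    description="client list emailed to finance instead of the sales colleague",
    data_affected="client names and email addresses",
))
```

Even near-misses are worth logging: the log itself is evidence of awareness and process when you later have to demonstrate compliance.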
To achieve that, website owners must make a decision about the types of data they collect and whether that information is necessary in order to perform the task for which the information is being collected. Any data collected or processed should be limited to the minimum amounts necessary to achieve the purpose for which it is collected. GDPR also requires all personal data to be secured, so encryption should be considered. If you use any kind of analytics programme on your website—for example, Google Analytics—it is your responsibility to ensure it is compliant. Google has taken care of its side, but it is the responsibility of all website owners to ensure analytics programmes meet GDPR requirements. If tracking data is collected that allows an individual to be identified by their IP address, for example, consent must be obtained. It is important that website visitors can get in touch with the site owner to exercise their GDPR rights and freedoms, so all contact information needs to be up-to-date.
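The analytics point above, no identifying tracking without consent, can be sketched as a simple gate. The script URL and flag name here are hypothetical, not a real analytics API:

```python
# Sketch: only emit an analytics tag once the visitor has consented to
# tracking that could identify them (for example, by IP address).
# The script URL and the "aip" flag are hypothetical placeholders.

def analytics_tag(consented: bool, anonymise_ip: bool = True) -> str:
    """Return the HTML snippet to embed, or nothing if there is no consent."""
    if not consented:
        return ""  # no identifying tracking without consent
    flag = "aip=1" if anonymise_ip else "aip=0"
    return f"<script src='https://example.com/analytics.js?{flag}'></script>"
```

Real consent-management platforms do the same thing with more machinery: the tag simply never loads until the consent flag flips.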
It must be easy for visitors to make contact should they wish to exercise their right to be forgotten, request a copy of any data that is collected and processed, and check their personal data for accuracy. In the event that a website visitor chooses to be forgotten, it is useful to have a mechanism in place that allows that to happen automatically via the website. Manually completing such a task will be time-consuming, especially if multiple requests are received. It is the responsibility of all website owners to become acquainted with GDPR rules and to ensure that their websites are GDPR compliant. If you own or operate a website, read up on GDPR requirements. Check to ensure that consent is obtained prior to the collection and processing of personal data, that the data subject’s rights and freedoms are protected and honored, and that all personal data is securely stored. You must also develop policies and procedures to identify and deal with data breaches. If a breach is experienced, the supervisory authority must be notified within 72 hours.
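The automated right-to-be-forgotten mechanism mentioned above might look like the following sketch, assuming a simple in-memory user store standing in for a real database:

```python
# Sketch of an automated erasure-request handler. The user store and the
# audit log are illustrative stand-ins for a real database.

users = {"alice@example.com": {"name": "Alice", "orders": [101, 102]}}
erasure_audit = []

def handle_erasure_request(email):
    """Delete the data subject's record, keeping a minimal audit trail
    (the fact of erasure, not the erased data itself)."""
    if email in users:
        del users[email]
        erasure_audit.append({"subject": email, "action": "erased"})
        return True
    return False

handle_erasure_request("alice@example.com")
```

The audit trail matters: you need to be able to prove the request was honoured without retaining the very data you erased.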
Hello, guys. In this lesson, we’ll talk about outsourcing your DPO in real-life scenarios. Once the necessary data protection officer skills are identified, the specific DPO is chosen, and the DPO services contract is agreed upon with the controller or processor, the DPO can begin to undertake the tasks specified in Article 39 of the GDPR. In performing the outsourced DPO role, a lot of interesting questions can arise about how various data protection scenarios will be evaluated. Is the likelihood of harm unconstrained, or does the likelihood need to exceed 50%, for example? What is the role of intent, despite its absence from the definition of processing under Article 4(2)? Does the encryption of the personal data, or its type and quantity, impact the analysis? How about the types of vulnerability testing scans being done, or the level of coding standard or information security standard achieved on the code system being scanned? The takeaway is that the easiest solution may be to always treat any vulnerability testing firm as requiring a processing agreement.

Photographic images. Photographs are used all over the web, from social media to advertising, but may fall into a special category of personal data if they are used for authentication and identification and are processed by technical means. Exactly what type of technical means used for authentication and identification changes a photographic image from being personal data to biometric data? Under existing data protection law, photographic images appear to be considered a type of biometric data.
For example, there is the Willems case: it identified a facial image as biometric data, citing Regulation (EC) No 2252/2004 and stating that biometric identifiers should be integrated in the passport or travel document in order to establish a reliable link between the genuine holder and the document. However, Recital 51 of the GDPR states that the processing of photographs should not systematically be considered processing of special categories of personal data, as photographs fall under the definition of biometric data only when processed through specific technical means allowing the unique identification or authentication of a natural person. If a person uses computer software to help enlarge or sharpen an image, are those considered to be technical means, so that the photo is considered biometric data? This topic requires additional clarification from a DPA as to where the boundaries are, as well as a special focus, for companies that use digital images for authentication and identification, on the type of process they use.

Unified consent. Consent is required to be able to load an app onto a mobile device and then get information from it, but this can lead to many requests for consent. Unifying these requests allows the user to more efficiently start using the app.
Under GDPR, consent is required to be unbundled and granular, meaning that consent must be obtained separately from other terms and conditions. Assume that the mobile device app will collect the name and health data of the individual. This means that there are four different types of consent in play: to install the app, to access information on the device, to process the personal data, and to process the special category of personal data, in this example the user’s health data. How should the requests for consent to each of these distinct consent requirements be presented? Whether one, two, three, or four requests are used, the information that must be presented for each consent would be somewhat different. How would this change if the app is gathering information from the mobile device itself, such as device or location information, or the user’s contacts? Balance is important, so the required informed consents are gathered with the least amount of disturbance to the app users’ experience, perhaps through the use of layered privacy notices.
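The four-consent scenario above can be modelled as a record with one flag per purpose. This is a sketch of the granularity requirement, not a complete consent framework:

```python
from dataclasses import dataclass

# Sketch of unbundled, granular consent for the four-consent scenario in
# the text: install, device access, personal data, special-category data.

@dataclass
class AppConsents:
    install: bool = False
    device_access: bool = False     # e.g. location, contacts
    personal_data: bool = False     # e.g. the user's name
    special_category: bool = False  # e.g. health data -- needs its own consent

def may_process_health_data(c: AppConsents) -> bool:
    # Special-category processing needs a separate, explicit consent;
    # bundling it into the install consent would not be granular.
    return c.install and c.personal_data and c.special_category

print(may_process_health_data(AppConsents(install=True, personal_data=True)))  # -> False
```

A layered privacy notice is then just a UI question: how many screens the user sees before these flags are set, not whether the flags exist separately.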
App Developer Many app developers will use an app for their own purposes, so they are considered the controller under GDPR. But when might an app developer not be a controller? The test is, of course, that the controller determines the purposes and means of processing. When an app developer collects personal data for their own use, including information provided by the device, they would be considered a controller.
But what about when the app collects personal data that will be used by another firm setting the rules for personal data collection and use? The second firm is clearly a controller, and the app developer appears to be a processor acting under the controller’s instructions. Working Party paper 169 states that a processor could operate further to general guidance provided primarily on purposes, without going very deep into details. Referring to the history of the Data Protection Directive, “means” is a shortened designation that does not only refer to the technical ways of processing personal data, but also to the how of processing, which includes questions like which data shall be processed, which third parties shall have access to this data, and when data shall be deleted. Therefore, any processor determining these essential elements of the means would be classified as a controller, but not if merely determining the technical and organisational means of processing. Applying these definitions, the design and development of application software alone would fall only within the technical means; but where the app developer also determines which data is processed, and possibly how long it is kept, the app developer may well be acting as a controller.
Guys. In this lesson, we’ll discuss the legal response to data breaches in the cloud. Cloud computing, as it moves closer to being a public utility like power and water, will be defined mostly by the risks involved, and these include data privacy risk. As is often the case with new IT services amid a marketing boom, the risks of cloud computing tend to be minimised by the marketers. Yet it is only by understanding, assessing, and managing those risks that confidence in cloud computing can expand significantly for both organisational and personal users of the cloud.
Given the increasing deployment of bring your own device into the corporate space, the prior distinctions between organisational and individual data and processes are becoming blurred, and thus the cloud risk evaluation process should be applicable to all types of users. When evaluating the risks of cloud computing, organisations and individuals need to take a hard look at both themselves and their cloud service providers. Cloud consumers first need to understand how they organise and manage their confidential data, which then provides a foundation for assessing their CSPs.
The standard methodology can be used in evaluating the risks for both cloud consumers and CSPs, whether the outsourcing is to private clouds, hybrid clouds, or public clouds. And regardless of the service model used, there are six major categories of cloud computing risk: legal; data protection; contracting; governance; verification; and response. Legal risk comes from the totality of all legal obligations that an organisation has under all cloud-related statutes it is subject to globally. Data protection risk involves the design, implementation, and evaluation of safeguards by the cloud consumer and CSP to protect the privacy of data. Contracting risk is the extent to which cloud users have legally protected themselves against unfavourable cloud-related events. Governance risk looks at how interoperable data and processes are and how portable they are to new CSPs. Verification risk comes from the comprehensiveness and quality of independent third-party assurances about the CSPs used. Response risk involves dealing with security-related incidents and incidents that impact the consumer’s privacy, including data breaches. Privacy concerns arise as a result of both the data protection risk and the response risk.
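The six risk categories above lend themselves to a simple scoring sheet. The 1-to-5 scale and the example scores below are illustrative; the category names come from the text:

```python
# The six cloud-risk categories from the text as a scoring sheet.
# The 1-5 scale and threshold are illustrative conventions.

RISK_CATEGORIES = ["legal", "data_protection", "contracting",
                   "governance", "verification", "response"]

def highest_risks(scores, threshold=4):
    """Flag categories scored at or above the threshold for priority review."""
    unknown = set(scores) - set(RISK_CATEGORIES)
    if unknown:
        raise ValueError(f"unknown categories: {unknown}")
    return sorted(c for c, s in scores.items() if s >= threshold)

print(highest_risks({"legal": 2, "response": 5, "verification": 4}))
# -> ['response', 'verification']
```

Scoring both yourself and each CSP against the same six categories gives the "hard look at both themselves and their cloud service providers" a concrete form.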
The protections to safeguard the privacy of data are well understood and not new at all with cloud computing, although cloud deployments do emphasise certain controls. For example, encryption is a must-have in the cloud computing world. Encryption must be deployed not only during transit from the cloud consumer to the CSP but also while stored by the CSP on disk, in mirror sites, on backup tapes, et cetera, and also in use to the extent possible. The legal response requires that organisations comply with a variety of statutory and regulatory requirements: for notification, for getting law enforcement and regulators involved, and for imaging or safeguarding potential evidence. There are many different data breach notification laws globally, often part of local privacy laws, and these are growing. It is important to remember that when cloud consumers enter the cloud, they have, by default, become global players, meaning that they will likely be subject to the data privacy laws of more than one country.
In Europe, the privacy directive requires European Union member states to implement local legislation requiring service providers responsible for hosting and transmitting consumer data to notify the appropriate national authorities in the event of a data breach. If consumers’ data is breached and the breach could have a negative impact on them, they must also be notified. While there is as yet no general federal data breach notification requirement in the US, there are sector-specific regulations in health care and financial services for reporting data breaches. Also, there are general data breach notification laws in almost every state. These laws typically require consumers to be notified if their data is breached, exposing them to a risk of harm.
This is most typically the case when the data is personally identifiable information or financial information that is stored in an unencrypted format. What may vary between the different state statutes is the type of information that must be reported, to whom it must be reported, and when it must be reported. These laws are constantly changing, as several US states, including Connecticut and Vermont, have recently revised their data breach statutory requirements. In the Asia Pacific region, there are both voluntary guidelines and industry-specific requirements to report breaches. For example, Australia had no general data breach statute, but notification has since become mandatory there. In Hong Kong, breach notification remains voluntary under proposed changes to the local privacy ordinance, but the government has promulgated guidelines and templates in advance of those changes. Japan has industry-specific regulations regarding data breach notification. In Taiwan and South Korea, newer revisions to privacy laws require data breach notifications. In China, local versions of data breach laws complement national breach notification regulations for service providers.
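The jurisdictional patchwork just described can be captured as a simple lookup table. The entries below are compressed summaries of the text, not legal advice or a complete survey:

```python
# Simplified summary of the breach-notification regimes described in the
# text. Entries are illustrative condensations, not legal advice.

BREACH_NOTIFICATION = {
    "EU":          "mandatory: notify the national authority, and individuals if at risk",
    "US-federal":  "sector-specific only (health care, financial services)",
    "US-states":   "general laws in almost every state; details vary",
    "Australia":   "mandatory (previously no general statute)",
    "Hong Kong":   "voluntary, with government guidelines and templates",
    "Japan":       "industry-specific regulations",
    "Taiwan":      "mandatory under revised privacy law",
    "South Korea": "mandatory under revised privacy law",
}

def notification_regime(jurisdiction):
    """Look up the regime, defaulting to a prompt to check local counsel."""
    return BREACH_NOTIFICATION.get(jurisdiction, "unknown -- check local counsel")
```

Because a cloud consumer is a global player by default, a table like this usually has more than one applicable row per breach.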
The legal response to a data breach when data is outsourced to the cloud essentially comes down to answering a series of questions. What data breach notification and privacy laws are implicated by a data breach at the CSP, given that the data, servers, and consumers may be situated in disparate countries around the world? Who is responsible for reporting a data breach: the CSP or the cloud consumer? When must the breach be reported: immediately, after an investigation, or perhaps never? To whom must the breach be reported: local data protection authorities, industry regulators, local and/or international law enforcement agencies (such as Interpol or the Department of Justice), and so on? In what circumstances must the data breach be reported, such as when a certain number of records or a certain type of sensitive data was breached, or when criminal activity is suspected? What types of information must be reported? How does the CSP know, in a virtual-resource multitenant cloud environment, which cloud consumer data has been breached? And what type of evidence must be saved for future criminal investigations or civil litigation, for example, network and system logs or data system images?
And how can this be done in a multitenant cloud environment? The example guidance from the Hong Kong government provides some insight into part of the legal response. It suggests that the data custodian first gather information, including when and where the breach occurred, how it was detected, the cause, what type of personal data was affected, and the number of data subjects potentially impacted. It recommends informing data subjects when there is a reasonable risk of harm. In its breach notification, it suggests including the date and time of the breach and its discovery.
It also suggests listing the type of personal data that has the potential to cause harm as a result of the breach, the remedial measures taken to ensure no further data loss, the contact person’s details, the law enforcement or other agencies notified, what is being done to assist affected consumers, and what they can do themselves to mitigate the risk of harm, such as identity theft and financial fraud. With data breaches, all cloud consumers should take the approach that the question is not if they will happen, but when, and whether they will be ready. Like the disruptions addressed by business continuity plans, data breaches can and do occur, even at some of the most well-known brand names and organisations with a strong public Internet security profile; and CSPs, by centralising many cloud consumers’ data, are a prime target for bad actors. So cloud consumers should create and test a robust response plan to use when a data breach event occurs and the privacy of their cloud-based data is compromised. This plan should address all three areas of cloud data breach response as explained above, including the legal aspects. Only then can cloud consumers confidently expand their footprint.