Webinar Recap

Data security of a cloud-based
application in Life Sciences 

Agenda

04:30
How is security handled within Amazon Web Services (AWS)?

08:10
Shared responsibility and disaster recovery plans

15:10
Data protection 

19:00
Data storage in the cloud

28:30 
Q&A session

57:30 
Conclusion

Data security is absolutely paramount in the life science sector, as we all know. So how exactly is sensitive data handled by cloud-based solutions to ensure it’s protected?

We’re thrilled that Patrick Lamplé, the expert on data security in the life sciences sector at Amazon Web Services (AWS), joined Scilife CEO Filip Heitbrink in this webinar to share his valuable insights on the subject.

The two sat down virtually to discuss what Life Sciences companies need to be aware of when choosing SaaS tools that store their data in the cloud. They also dove into the details of how responsibilities are spread between AWS as the cloud provider, the SaaS application provider, and you, the customer and data owner.

Q&As from the session

1. Is customers' data encrypted?

Patrick Lamplé (AWS)
Encryption is part of the customer's decision. By default, we suggest encrypting everything; that's one of the best practices in this domain. What is interesting with AWS is that you can use keys that you manage in your own AWS environment.

You can bring your own key, or in a hybrid mode you can even keep the key outside of AWS entirely. Everything that passes through AWS can be encrypted. So it depends on how you want to do it: what your approach to encryption is, what you need to achieve, and how far you need to be able to prove it.

What is very interesting is that if you use the Key Management Service (KMS) in AWS, for example, every use of a key is audited. You can see which person or role accessed the key at a given moment, and it's all logged. That's also a great way to show the key was not stolen. And because each use is an event in your audit trail, you can send a notification to your security officer when something happens that should not happen, or you can automatically remediate: block the access or rotate your keys. We help customers there, because it's a difficult topic when you don't have these services. I guess you're doing the same kind of thing.
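As a rough illustration of what Patrick describes here (not Scilife's or AWS's exact configuration), this is roughly how a customer-managed KMS key with automatic rotation could be created and used from the AWS SDK for Python (boto3); the region, description, and payload are placeholders:

```python
# Minimal sketch: a customer-managed KMS key with automatic rotation, used to
# encrypt a small payload. Every call below is recorded by AWS CloudTrail,
# which is what makes the key-usage audit trail possible.
import boto3

kms = boto3.client("kms", region_name="eu-west-1")  # region is an assumption

# Create a customer-managed key and turn on automatic rotation.
key = kms.create_key(Description="Example key for application data")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Encrypt a small payload with that key; larger objects would normally use
# envelope encryption (generate_data_key) instead.
ciphertext = kms.encrypt(KeyId=key_id, Plaintext=b"sensitive record")["CiphertextBlob"]
print(len(ciphertext), "bytes of ciphertext")
```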

 

Filip Heitbrink (Scilife)
Yeah, so all the best practices are implemented. You already mentioned key rotation, and you also mentioned encryption in transit and at rest. These are maybe technical terms: "in transit" means that data travelling between servers, between the user and the application, or within Amazon services is encrypted. For example, at the user level, you cannot log into Scilife over a plain HTTP connection, which is unencrypted; it always redirects to HTTPS. Since it's a web-based application, this means the connection between your browser and the Amazon server is encrypted.
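A quick way to check this kind of behaviour yourself is to make a plain-HTTP request and confirm it gets redirected; a minimal sketch using a placeholder hostname (not a real Scilife endpoint):

```python
# Minimal sketch: verify that a plain-HTTP request is redirected to HTTPS.
import requests

resp = requests.get("http://app.example.com/login", allow_redirects=False, timeout=10)

# A 301/308 (or 302/307) response with an https:// Location header means the
# server refuses to serve the page over an unencrypted connection.
assert resp.status_code in (301, 302, 307, 308)
assert resp.headers["Location"].startswith("https://")
print("HTTP requests are redirected to HTTPS")
```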

And then, when we go to backups or data storage, everything that can be turned on in terms of encryption is turned on within the Amazon services, which covers encryption at rest as well. Data that is not moving, that is stored somewhere, is stored in an encrypted way. If something were to happen and data were stolen, the person could not really do anything with it. And if you were listening in on a connection between the user and the servers, or somewhere within the infrastructure, you would only see garbage going back and forth because it's all encrypted; even if you managed to copy it, it would be useless.
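For the storage side, a hedged sketch of what "turning encryption on" can look like in practice is enabling default server-side encryption on an S3 bucket, so every object written to it is encrypted at rest; the bucket name and key alias below are placeholders, not Scilife's actual resources:

```python
# Minimal sketch: default server-side encryption (SSE-KMS) on an S3 bucket so
# every object stored in it is encrypted at rest.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="example-backup-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-app-key",
                },
                "BucketKeyEnabled": True,  # reduces the number of KMS requests
            }
        ]
    },
)
```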

 

Patrick Lamplé (AWS)
I think we are innovating on security all the time. We use a specific hardware and software solution in AWS called Nitro, which provides encryption by default on the connections between the hardware, without impact on performance. We also announced Nitro Enclaves, so data can be decrypted and processed only in isolated environments with no human access and no external software access.

We also push and use instances with memory encryption by default. All these elements improve over time and make sure that you raise the bar instead of waiting for an issue to say, ah, we might do something more. It's continuously evolving and getting simpler, but it's very important to understand that it's also based on a discussion about what you want, and you adapt this all the time.

 

2. How do you deal with backups?

Filip Heitbrink (Scilife)
There are basically two types of data. First, the file store: any file that is uploaded to Scilife, or any file that is generated by Scilife such as reports or downloadables. Those files are stored in Amazon's S3 service, which is what they call object storage. The particularity of S3 is that as soon as a file is saved, it is automatically replicated. I think it's now four geographically separated data centers. Correct, Patrick?

Patrick Lamplé (AWS)
Yeah, and within the region you have six different copies of it, to make sure you have high availability and reliability of your data. That's 99.999999999% of durability.

Filip Heitbrink (Scilife)
We're talking about six copies across geographically separate data centers, and this comes out of the box with S3. We didn't even have to configure it: we save the file and it's automatically replicated to the different data centers. So that's on the file level.

Then we have the database data. What we do there is always have a master-slave setup. We have separate databases for the test environment, the production environment, and the validation environment. For production, for example, this master-slave setup means we write data to two servers in real time, to make sure that if one data store goes down, we still have the other one.

Additionally, we make a backup of the customer's whole database every five minutes, with 30-day retention. This is also mentioned in the contract. So according to the contract we have this backup scheme in place, and through the disaster recovery plan we test twice per year whether that data is recoverable, to make sure this is not just a feature we turn on but something that actually works and that we know how to do.

We also use Amazon's Multi-AZ (multiple Availability Zones), which ensures data is not stored and backed up in just one location but is copied and duplicated across different Availability Zones, to increase the durability of the data stored in the database.
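As a hedged sketch of the database side, assuming the databases run on Amazon RDS (the webinar doesn't name the exact service), Multi-AZ and a 30-day automated backup retention could be enabled like this; the instance identifier and region are placeholders:

```python
# Minimal sketch: enable Multi-AZ and a 30-day automated backup retention
# window on an RDS instance. RDS automated backups allow point-in-time
# recovery, which fits the short-interval backup scheme described above.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")  # region is an assumption
rds.modify_db_instance(
    DBInstanceIdentifier="example-production-db",  # placeholder identifier
    MultiAZ=True,                 # synchronous standby in another Availability Zone
    BackupRetentionPeriod=30,     # keep automated backups for 30 days
    ApplyImmediately=True,
)
```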

 

Patrick Lamplé (AWS)

In case an Availability Zone becomes unavailable for any bad reason, worst-case scenario, you still have a copy somewhere else, so you can switch automatically to the secondary database and keep your business continuity. We even have global database architectures if you want to copy data everywhere in the world. It really depends on what you want to achieve. In the case of Scilife, you're completely following what should be done for this kind of use case.

And again, it's always about a discussion. I've seen situations where people were overestimating what they needed to do, and keeping it simple and efficient is important. In other cases we ask, okay, what happens if you don't put a backup system in place?

So it's really part of the discussion and that's what I really like. It's continuously improving, raising the bar for everyone.

 

3. Do Amazon AWS employees have access to our data?

Patrick Lamplé (AWS)

No. At Amazon we are "customer obsessed". Customers use the platform and their data, which is very important and very private, and we don't know what they do on the platform.

We know they are using this amount of storage or compute, but what they do with it, we have no idea. So there's no way for Amazon employees to access data, except if the customer specifically gives us access through the IAM console (Identity and Access Management), granting access to someone from AWS.

And even in this case, we would say we need a contract. We need to make it clear. 

And that's part of the overall security posture. Why should we know what you do with your data? There's no good reason for that. So obviously we're not touching this information.
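To make this concrete: access by anyone outside the account only happens if the customer deliberately sets it up in IAM. A minimal, hypothetical sketch of such an explicit grant (the external account ID and role name are made up) might look like this:

```python
# Minimal sketch: a customer explicitly grants an outside AWS principal
# time-boxed, read-only access by creating an IAM role with a trust policy.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # hypothetical external account
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="ExternalSupportReadOnly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    MaxSessionDuration=3600,  # limit each session to one hour
)
iam.attach_role_policy(
    RoleName="ExternalSupportReadOnly",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # AWS managed read-only policy
)
```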

 

4. How does Amazon AWS comply with GXP?

Patrick Lamplé (AWS)

So GXP is obviously one of the big topics in this domain. As you know, GXP is not a certification, it's not a security checklist.

It's a principle: "do whatever is needed to protect the patient's data", and make sure it's scientifically and medically relevant. So there's no certification to be GXP compliant. But AWS allows you to be GXP compliant. There are hundreds of customers in this space, big pharma, biotechs, and startups, every type of customer, running GXP workloads on AWS.

The way you do it is by doing a classical architecture definition, checking that you have everything you need. We have a white paper, in fact, explaining to customers how to get the documentation they need to be GXP compliant. It's based on ISO 9001 for the quality management system (QMS).

You can go to the service called "Artifact" in the AWS console, where you can look at all the reports and all the certificates. And we have specific documents for that.

https://aws.amazon.com/artifact

We also have partners working in this space. We have people internally who can help with it, like me, and we also have partners who provide consulting or even technology solutions to help you achieve GXP compliance.

So it's completely feasible. And I guess, let's go back to you, because you are using AWS and you're in this space: how do you manage GXP at Scilife?

 

Filip Heitbrink (Scilife)

Basically through a GAMP 5 validation process. This is something that is the whole basis of our solution.

We do a full GAMP 5 validation on our end for each release. Of course, there is a difference between a minor, a medium, or a major release in the documentation that we develop and execute. But in the end, everybody is using the same application in production. As customers are onboarded their data is kept separate, but the application layer is the same for all customers.

So what we do is a full GAMP 5 validation when we go live with a specific version of a module, and it includes the whole thing: user requirements, functional specs, design specs, configuration specs, and we execute performance qualification tests. We have the whole traceability matrix.
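To illustrate the idea of a traceability matrix (this is not Scilife's actual validation tooling, just a toy example), it boils down to linking every user requirement to the tests that cover it and flagging any gap:

```python
# Illustrative sketch only: a tiny traceability matrix linking user
# requirements to the tests that cover them, plus a check for coverage gaps.
requirements = {
    "URS-001": "Users must sign documents with an electronic signature",
    "URS-002": "The system must keep a complete audit trail",
}

# Each executed PQ test maps back to the requirement(s) it verifies.
test_results = [
    {"test_id": "PQ-010", "covers": ["URS-001"], "passed": True},
    {"test_id": "PQ-011", "covers": ["URS-002"], "passed": True},
]

covered = {req for t in test_results if t["passed"] for req in t["covers"]}
untested = set(requirements) - covered
if untested:
    raise SystemExit(f"Traceability gap, requirements without passing tests: {untested}")
print("All requirements traced to passing tests")
```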

Of course, we cannot fully validate the solution for your specific use before you go live with it; we validate it on our side as we develop it, but beyond that it depends on you. Again, we have three layers here, right? One is what Amazon provides. Another is what we provide as a tool, a life sciences platform that we validate internally and that you can then use.

According to their use case, the customer needs to do the last validation steps on their end, to say: this is the tool I'm acquiring out of the box, it does this, this is how I'm going to configure it according to my needs, and this is how I'm going to use it. That last part is of course the responsibility of our customers, but we provide the whole validation documentation package, which is typically a lot of work and very costly to produce.

It takes away 95% of the validation effort for our customers. It doesn't make sense to provide a non-validated solution, which is the same identical tool for everybody, and then just tell each customer to validate it themselves: they would all essentially be generating very, very similar documentation for the same features.

So it doesn't make sense to do it that way. That's why we go very, very far in providing a full, very complete validation documentation package that customers can base their last validation steps and checklists on. That's how we make sure the tool itself complies with their GXP requirements.

 

Patrick Lamplé (AWS)

I think there's a lot of automation behind that. There are a lot of things we can do automatically, a lot of documentation and testing we can automate. In the past we needed to test everything, because new servers, new environments, everything needed to be tested.

Now this can be done automatically. Infrastructure as code is one of the cloud technologies that helps you do that. It also shapes how you build your business case. A lot of customers like you are providing solutions for quality management systems, so they are expected to be GXP compliant.

Their customers are using them for GXP, so retesting everything for everyone doesn't make sense. I'm supporting customers in that transformation. In the future we want to embed compliance and security, and make validation part of the objective; it's a feature for the customer. As we are innovating, and as you said, you're creating new modules and new versions of those modules. You need to keep up with this and not redo everything all the time.

That's also how we build our services. We make sure that we deliver to the customer what they expect for GXP; we've built them that way. Before deploying, we test everything, all use cases, to see how it goes.

It's part of a very deep engineering approach to make sure it happens, and I see that you have the same approach. The value for the customer is not GXP itself; that's an obligation, a practice. The next step is what you do with it and how you make value out of it.

 

5. Should we build it ourselves or buy an off-the-shelf solution?

Filip Heitbrink (Scilife)

Many companies think: if we build it ourselves, what's the development cost? We develop it once and we're done. What's the validation cost? We validate it once and we're done, and we don't have to pay for a SaaS solution year after year. So after X amount of time there would be a net benefit to not acquiring it.

But in practice, business needs and regulatory differences or changes mean you end up with a continuous development requirement. And the validation part is crazy expensive and time-consuming. So here's the way we do it: for bugs, for instance, we have a change control in place in which we solve the bug, test it, and release it very fast to production.

If there are new features, we do them in a medium or a major release. So it's a continuous improvement cycle, with the compliance requirements on top of that. If you look at commercial applications sold in a non-regulated space, you see a continuous deployment strategy.

Literally, the developer makes a change in the code, say a feature, it goes through a testing pipeline, and if everything is green it hits production the same day. That's something we can never achieve, because there is the requirement of signing off on documentation by different people before you can actually deploy to production.

So what we do is continuous deployment up to an internal test server; there we execute our PQTs and make sure everything is correct on our side. Then we deploy it to our customers' validation environments. From there, customers have, according to the contract, 60 days to verify the new release: going through the change list and the tests they want to do on their end to make sure everything is okay.

After 60 days, the module goes live for all customers at the same time. This is how we can keep a SaaS solution validated in a regulated space while constantly improving it. And it's very cost-effective for the customer: you don't pay separately for this validation, it's included in the price, and it's much more efficient than trying to do it internally, creating a new version, having to validate it, upgrading, all of that.
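As a purely illustrative sketch of that release gate (not Scilife's actual tooling), the 60-day rule amounts to a simple date check before a build can be promoted from the validation environment to production:

```python
# Illustrative sketch only: refuse to promote a build to production before the
# contractual validation window on the validation environment has elapsed.
from datetime import date, timedelta

VALIDATION_WINDOW = timedelta(days=60)

def ready_for_production(deployed_to_validation_on: date, today: date) -> bool:
    """True once the build has sat in the validation environment for 60 days."""
    return today - deployed_to_validation_on >= VALIDATION_WINDOW

# Example: a build deployed to validation on 1 March may go live from 30 April.
print(ready_for_production(date(2022, 3, 1), today=date(2022, 4, 29)))  # False (59 days)
print(ready_for_production(date(2022, 3, 1), today=date(2022, 4, 30)))  # True  (60 days)
```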

If you're thinking about building versus buying, think about how often a release might be required and how you would need to revalidate it. Your internal people are typically there to make sure the infrastructure keeps running, not to build critical tools with everything around them, validate them, and create new releases.

If you're a very big company, it might be the case. But normally it's not.

 

6. How often do you have new releases on Scilife?

Filip Heitbrink (Scilife)

We now have 12 modules, and we do one, two, or three new releases per module per year, because there are always improvements we keep making. We are also adding two new modules according to our roadmap, and of course the roadmap has more modules for next year. If we count an average of two new releases per module, plus the new modules we develop, it's about 25 releases per year.

So that's almost two releases per month. Our main goal as a SaaS provider is to become world class at developing what the market needs, very fast, then handing releases to the QA department for validation, and constantly improving the validation process to make it as efficient, short, and well-documented as possible.

Deployments are automated and go immediately to the validation environment, and there the 60 days start counting. It was 30 days in the past, but for larger customers 30 days is normally not enough, so we increased it to 60 days.

Those 60 days are non-negotiable; we have to wait. In a commercial, non-regulated environment you can just push whenever you want: if it's well tested, it hits production, you have your new feature, and people are happy because it constantly improves.

Here we have this requirement: if you want a feature, we have to develop it, validate it, give you 60 days to do the validation on your end, and then it hits production. It's a drawback, but it's still quite fast.

 

7. How do you manage the requests from different customers about your modules and your solution?

Filip Heitbrink (Scilife)

That's a good question. We have a whole feature request process in place, because from day one we had a very lean approach to software development. We did not want to just build whatever we came up with, whatever we think the market needs.

We came out with a very thin version 1.0 and went to market pretty fast, with something that didn't have many features but solved very specific problems: document control, the trainings module, events, and change control. We came out with five modules, but in a relatively simple version.

With our customer success team we constantly go through our ticketing system, and we constantly get requests from customers: hey, this would be a nice feature, I'm missing this, I would improve that. We're constantly managing that.

I think we get around 60 new feature requests monthly, and they all go through a process. Internally, the customer success team and the product team talk about each and every feature request and whether it makes sense for all customers as a whole, because it needs to make sense for everybody.

It cannot be a feature very specific to one customer's use case; then we have to say no. If it's a very important feature to that customer, we can even decide to implement it and make it configurable in such a way that it's off by default, and it can be turned on if another customer needs it in the future.

By default, we go through the feature requests and decide whether we're going to do it, and if yes, when and in which module version people can expect it. Then we plan it on our product roadmap. That's how we go through the whole process in a systematized way, because it really drives our product development roadmap.

Feedback is a gift.

 

8. Isn't AWS security highly determined by the customer's own software architecture and appliances in the cloud?

Patrick Lamplé (AWS)

You can explore the security section of the AWS website to learn how customers can protect themselves and how AWS takes care of security on its side of the shared responsibility model ("security of the cloud"): https://aws.amazon.com/security/. You can also benefit from the experience of our principal engineers in the Builders' Library, https://aws.amazon.com/builders-library, which contains articles and videos explaining how we build and run AWS services.

9. Could you please shed some light on the practical aspects of data security? How do vulnerabilities present in a hosted product affect the overall data security of AWS?

 
Patrick Lamplé (AWS)

Indeed, there are thousands of customers hosting many different products, so how do we ensure there is no domino effect? You can explore the security section of the AWS website to learn how customers can protect themselves and how AWS takes care of security on its side: https://aws.amazon.com/security. There you can explore the different aspects of security and compliance, including the Amazon data centers: https://aws.amazon.com/compliance/data-center/data-centers.
 

10. How do you both specifically guard against ransomware?

Patrick Lamplé (AWS)

 

Want to know more?

We're here to answer all your questions. Let's chat.