A common assumption is that a Dedicated Instance means you are locking down a single piece of hardware just for your own purposes.
A Dedicated Instance does not work like this. Your instance runs on dedicated hardware, but that hardware is not locked to you. If you stop and start the instance, you can get other hardware somewhere else. Basically, the hardware is "yours" (you are not sharing it with others) only for the time your instance is running. Stop and start it, and you may later get a different physical machine (maybe older, maybe newer, maybe with slightly different specs). In other words, your instance moves around between physical servers - whichever one is not occupied by others at the time.
With a Dedicated Host, the physical server is basically yours. It does not change; it's always the same physical machine for as long as you are paying.
Answer from Marcin on Stack Overflow
Dedicated Host
As soon as you 'allocate' a Dedicated Host, you start paying for that whole host.
A host computer is very big. In fact, it is the size of the largest instance of the selected family, but it can be divided up into smaller instances of the same family. ("You can run any number of instances up to the core capacity associated with the host.")
Any instances that run on that Host are not charged, since you are already being billed for the Host.
That is why a Dedicated Host is more expensive than a Dedicated Instance -- the charge is for the whole host.
Dedicated Instance
"Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer... Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances."
This means that no other AWS Account will run an instance on the same Host, but other instances (both dedicated and non-dedicated) from the same AWS Account might run on the same Host.
Billing is per-instance, with a cost approximately 10% more than the normal instance charge (but no extra charge if it is the largest instance in the family, since it requires the whole host anyway).
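To make the placement and billing difference concrete, here is a minimal boto3 sketch (not from the original answer; the region, Availability Zone, and AMI ID are placeholder assumptions) showing how each model is requested through the EC2 API:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region

# Dedicated Host: allocate (and pay for) the entire physical server up front.
host = ec2.allocate_hosts(
    InstanceFamily="m5",            # host can be carved into m5 sizes
    AvailabilityZone="us-east-1a",  # assumed AZ
    Quantity=1,
)
host_id = host["HostIds"][0]

# Instances placed on that host add no separate instance charge;
# the host itself is what you are billed for.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)

# Dedicated Instance: billed per instance; AWS chooses the single-tenant
# hardware and may move the instance to a different machine on stop/start.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},
)
```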
It is not possible to share a single EBS volume between multiple EC2 instances.
Your diagram offloads the data to a shared server, but that shared server is simply another single point of failure. So you are not gaining anything: if the AZ of that server goes down, you've lost the data, even if the web server/VisualSVN server in another AZ is still running.
You should split the server into its two separate functions, on two separate servers/clusters, so they can be handled independently of each other:
- web server, and
- VisualSVN server
For the web server, do you really need to mirror the volume in a multi-instance scenario, or can you keep your instances terminable at any time without data loss? Ideally, you would not save any data locally on the instance. Instead, you would save all data off-server, to a database or to Amazon S3. That way, the data is available to all instances, all the time, and if a server is lost, none of the data is. Make a "master" AMI and create all instances in an Auto Scaling group from that master AMI. When your web server code changes, build a new AMI, terminate the old instances, and create new ones from the new AMI.
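As a concrete illustration of the "save all data off-server" advice, here is a minimal boto3 sketch (the bucket name and key scheme are hypothetical) in which every instance reads and writes shared data in S3, so any instance can be terminated without losing anything:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-app-data"  # hypothetical shared bucket

def save_upload(user_id: str, filename: str, body: bytes) -> None:
    # Every instance writes here, so no data lives on any one server.
    s3.put_object(Bucket=BUCKET, Key=f"uploads/{user_id}/{filename}", Body=body)

def load_upload(user_id: str, filename: str) -> bytes:
    # Any instance (old or newly launched from the master AMI) can read it back.
    obj = s3.get_object(Bucket=BUCKET, Key=f"uploads/{user_id}/{filename}")
    return obj["Body"].read()
```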
For the VisualSVN server, the question to ask is whether VisualSVN can handle the volume's data changing while the running process is unaware of it. For example, if the running process caches some data in RAM, what happens if some hard drive sync process comes along behind its back and changes the data on disk? It could be that VisualSVN server is simply not able to handle a multi-instance scenario, in which case you may not be able to cluster it. It's also possible that VisualSVN server has its own clustering feature; if so, you should investigate that.
This is a use case that has been sought after for quite a while in AWS. As described in this thread, two common ways to accomplish it have been to use S3 or NFS to share data between instances.
On April 9th 2015, Amazon announced Amazon Elastic File System (Amazon EFS), which provides what you are asking for in your diagram.
As mentioned in a comment, AWS has announced EFS (http://aws.amazon.com/efs/) a shared network file system. It is currently in very limited preview, but based on previous AWS services I would hope to see it generally available in the next few months.
In the meantime there are a couple of third-party shared file system solutions for AWS, such as SoftNAS: https://aws.amazon.com/marketplace/pp/B00PJ9FGVU/ref=srh_res_product_title?ie=UTF8&sr=0-3&qid=1432203627313
S3 is possible but not always ideal. The main blocker is that it does not natively support any filesystem protocols; all interactions have to go through the AWS API or HTTP calls. Additionally, when using it for session stores, the 'eventually consistent' model will likely cause issues.
That being said, if all you need is updated resources, you could create a simple script, run either from cron or on startup, that downloads the files from S3.
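For example, a rough sketch of such a script in Python with boto3 (the bucket name, prefix, and local directory are assumptions), runnable from cron or an init script:

```python
import os
import boto3

s3 = boto3.client("s3")
BUCKET = "my-shared-resources"    # assumed bucket name
PREFIX = "resources/"             # assumed key prefix
LOCAL_DIR = "/var/www/resources"  # assumed target directory

# Walk every object under the prefix and mirror it to the local disk.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("/"):   # skip "folder" placeholder keys
            continue
        dest = os.path.join(LOCAL_DIR, os.path.relpath(obj["Key"], PREFIX))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file(BUCKET, obj["Key"], dest)
```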
Finally, in the case of static resources like CSS and images, don't store them on your web server in the first place - there are plenty of articles covering the benefits of storing and serving static web resources directly from S3 while keeping the dynamic content on your server.
From what we can tell at this point, EFS is expected to provide basic NFS file sharing on SSD-backed storage. Once available, it will be a v1.0 proprietary file system. There is no encryption, and it's AWS-only. The data is completely under AWS control.
SoftNAS is a mature, proven, advanced ZFS-based NAS filer that is full-featured, including encrypted EBS and S3 storage, storage snapshots for data protection, writable clones for DevOps and QA testing, RAM and SSD caching for maximum IOPS and throughput, deduplication and compression, cross-zone HA, and a 100% up-time SLA. It supports NFS with LDAP and Active Directory authentication, CIFS/SMB with AD users/groups, iSCSI multi-pathing, FTP and (soon) AFP. SoftNAS instances and all storage are completely under your control, and you have complete control of the EBS and S3 encryption and keys (you can use EBS encryption or any Linux-compatible encryption and key management approach you prefer or require).
The ZFS filesystem is a proven filesystem that is trusted by thousands of enterprises globally. Customers are running more than 600 million files in production on SoftNAS today - ZFS is capable of scaling into the billions.
SoftNAS is cross-platform and runs on cloud platforms other than AWS, including Azure, CenturyLink Cloud, Faction cloud, VMware vSphere/ESXi, VMware vCloud Air and Hyper-V, so your data is not limited to or locked into AWS. More platforms are planned. It provides cross-platform replication, making it easy to migrate data between any supported public cloud, private cloud, or premises-based data center.
SoftNAS is backed by industry-leading technical support from cloud storage specialists (it's all we do), something you may need or want.
Those are some of the more noteworthy differences between EFS and SoftNAS. For a more detailed comparison chart:
https://www.softnas.com/wp/nas-storage/softnas-cloud-aws-nfs-cifs/how-does-it-compare/
If you are willing to roll your own HA NFS cluster, and be responsible for its care, feeding and support, then you can use Linux and DRBD/corosync or any number of other Linux clustering approaches. You will have to support it yourself and be responsible for whatever happens.
There's also GlusterFS. It does well up to 250,000 files (in our testing) and has been observed to suffer from an IOPS brownout when approaching 1 million files, and IOPS blackouts above 1 million files (according to customers who have used it). For smaller deployments it reportedly works reasonably well.
Hope that helps.
CTO - SoftNAS