I want to buy hardware for VDI, virtualization and general cloud services (the hardware will be colocated), and I could use some help/advice on my current plan. I already have a semi-successful IT company: a customer base, advertising, cash flow, connections, etc.

For starters I will be using one colo location; the DCs in my country are among the most reliable in the world, and so is the infrastructure. My DC is one of the bigger ones, and I will either go with a private rack or with two separate racks shared with my clients (so I will be the only one placing hardware). There are no natural disasters or anything like that, and DCs going down is extremely rare here; it only happens to startups/smaller shops. The $20,000 is purely for hardware.

I have already decided to go with the Windows Server/Hyper-V platform, because of the new technologies that make "decent" low-budget setups possible. I cannot afford a SAN, so I will need a different storage solution that uses the Server 2012 R2 capabilities.
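Something like this is what I picture for the storage layer, built on Storage Spaces with SSD/HDD tiering (a minimal sketch; the pool name, disk name and sizes are placeholders I made up, not something I have tested):

    # Pool every eligible local disk, then build a mirrored, tiered space on top.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "VDIPool" `
        -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    # Two media tiers: hot VDI blocks get promoted to SSD, cold data sinks to HDD.
    $ssd = New-StorageTier -StoragePoolFriendlyName "VDIPool" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "VDIPool" -FriendlyName "HDDTier" -MediaType HDD

    # Mirrored for redundancy, with a write-back cache to absorb random VDI writes.
    New-VirtualDisk -StoragePoolFriendlyName "VDIPool" -FriendlyName "VDI01" `
        -StorageTiers $ssd,$hdd -StorageTierSizes 200GB,2TB `
        -ResiliencySettingName Mirror -WriteCacheSize 10GB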
I might have been a little unrealistic regarding my requirements, and they are more goals anyway. Let me try again: my main goal is virtualization and delivering VDI (zero clients). Second to that, I would like to deliver some cloud services (backup, hosting, sharing, remote desktop, etc.) to eventually offer an all-in-one package. I understand that with my budget I will not immediately be able to offer all of these things at maximum performance and reliability. However, I should be able to lay a solid foundation for future investments, and probably even offer VDI at a small scale?

Some things to take into account before sharing your advice: these services will not be offered publicly; I will start with my current customer base and work up from there. Most of my customers have minimal storage requirements, so I'm okay with making concessions on the amount of storage I can offer, as long as I can easily add more in the future. Most clients have very little bandwidth (8-60 Mb/s download), which is why I want to build my architecture around the zero-client model; that way I can offer my services to at least every one of my clients. The downside, of course, is that I am very dependent on latency, which is why I think local SSDs (combined with HDDs) could be my answer.

I am aware of SSD lifespan concerns, but I will choose only drives that have proven to sustain some heavy-duty punishment. (Check THIS out: a hardcore SSD endurance test, 600 TB so far, and not one has failed, not even the TLC-based drives.) If each of the consumer-grade SSDs in that test can sustain over half a petabyte of non-stop writes without breaking a sweat (except maybe the TLC one), and will probably make it to a petabyte before dying, they will serve my cause just fine.
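My back-of-the-envelope math on wear, assuming a drive absorbs the write load of a handful of pooled desktops (the GB/day figure is a guess on my part, not a measurement):

    # Endurance already demonstrated in the test vs. an assumed daily write load.
    $enduranceTB   = 600   # TB written so far in the test without a failure
    $gbPerDay      = 50    # assumed writes per drive per day from pooled desktops
    $daysToWearOut = ($enduranceTB * 1000) / $gbPerDay
    "{0:N0} days (~{1:N1} years) at {2} GB/day" -f $daysToWearOut, ($daysToWearOut / 365), $gbPerDay

At that rate a drive outlasts any realistic hardware refresh cycle, so endurance alone shouldn't rule out consumer SSDs.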
Now, I've made some decisions so far. The E3 does not offer enough RAM headroom for future expansion, so I will be going with the E5 platform. Since dual-socket motherboards are not that much more expensive than single-socket ones, I will probably go dual socket: very flexible in terms of expansion, and not much more expensive.
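To put numbers on the RAM argument (the per-desktop figure is an assumption on my side and ignores any Dynamic Memory savings):

    # Rough desktops-per-host math: the E3-1200 tops out at 32 GB, a dual E5 board goes far beyond.
    $gbPerDesktop = 2
    $hostOverhead = 8      # GB reserved for the parent partition
    $e3MaxRam     = 32     # GB, the E3-1200 platform ceiling
    $e5Ram        = 128    # GB, a half-filled dual-socket E5 board with room to grow
    "E3 host: {0} desktops" -f [math]::Floor(($e3MaxRam - $hostOverhead) / $gbPerDesktop)
    "E5 host: {0} desktops" -f [math]::Floor(($e5Ram - $hostOverhead) / $gbPerDesktop)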
Now the big question for me at this point is: can I use the DP E5 nodes for storage as well, by adding local storage to every node and using clustering for redundancy and sharing? What are the cons versus dedicated storage servers? Would it be better to use separate nodes just for storage, and if so, why? That would cost me a lot more money, and the all-in-one option would be a lot more efficient.
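The clustered route I have in mind would look roughly like this (node names and the IP are placeholders), though from what I've read, clustered Storage Spaces on 2012 R2 wants the disks in shared SAS JBODs rather than inside each node, so please correct me if pooling per-node local disks like this is a dead end:

    # Form the failover cluster across the converged compute/storage nodes.
    New-Cluster -Name "HV-CLUSTER" -Node "HV-NODE1","HV-NODE2" -StaticAddress 10.0.0.10

    # Turn the clustered disks into Cluster Shared Volumes so VMs on any host share them.
    Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk" | Add-ClusterSharedVolume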
If I'm going the all-in-one route, I will probably go with 10 GbE as well, and put each node in at least a 2U form factor for more drives, expansion possibilities, and ease of management. I am aware that I will end up with a smaller number of nodes in total, but because each node serves so many purposes I still think this would be the best route, especially considering that I will eventually need to step up to 10 GbE anyway, and in the future I can always buy more nodes for extra redundancy. I can also start with nodes that are only half-filled and work up from there, and if one of the nodes does go down, I could temporarily move the CPU, RAM, etc. to the remaining nodes to give them some more power until I fix the failed node.
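For planned rebalancing between half-filled nodes I'd rather shift VMs than parts, using Live Migration over the 10 GbE network. A minimal sketch of what I mean, run on each host (the subnet, VM and host names are made up):

    # Allow live migrations, cap concurrency, and pin the traffic to the 10 GbE subnet.
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
               -MaximumVirtualMachineMigrations 4
    Add-VMMigrationNetwork "10.0.1.0/24"

    # Then moving a desktop VM off a node is a one-liner.
    Move-VM -Name "DESKTOP-042" -DestinationHost "HV-NODE2"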
So, that is my plan as of now; feel free to criticize anything you would do differently. If something in my setup is impossible or impractical, please say so and tell me what a better alternative would be. Many thanks for the responses so far.

Regards,
Cloudbuilder