Author Topic: hosting requirements and strategies  (Read 2721 times)

pocock

  • Sr. Member
  • Posts: 280
  • Karma: +31/-0
hosting requirements and strategies
« on: July 21, 2020, 11:38:18 am »

Can anybody comment on ideas or case studies for hosting with the Talos II platform?

For example:

What types of workload is it most suitable for, and how would you build a Talos II-based rack-mount system for each of them?

Many hosting companies sell space by full rack, half rack, or quarter rack (approximately) - how would you fit out one of these spaces if you had a pure Talos II strategy, perhaps even using the platform for the firewall and BGP routing?
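To make that question concrete, here is the kind of back-of-the-envelope budgeting I have in mind, sketched in Python; every figure (2U chassis, ~500 W per loaded server, the usable units and power feeds per space) is my own assumption, not a vendor spec:

Code: [Select]
# Rough space/power budgeting for an all-Talos II colo build-out.
# Every figure below is an assumption for illustration, not a vendor spec.

CHASSIS_U = 2           # assumed 2U rack-mount Talos II chassis
WATTS_PER_SERVER = 500  # assumed worst-case draw for a loaded server

# (usable rack units, power feed in watts) -- assumed typical colo offerings
SPACES = {
    "quarter rack": (10, 2000),
    "half rack":    (20, 4000),
    "full rack":    (42, 8000),
}

for name, (units, feed_w) in SPACES.items():
    by_space = units // CHASSIS_U
    by_power = feed_w // WATTS_PER_SERVER
    fit = min(by_space, by_power)
    limit = "space" if by_space <= by_power else "power"
    print(f"{name}: {fit} servers (limited by {limit}), "
          f"{units - fit * CHASSIS_U}U left for firewall/switching")

On those assumptions every space comes out power-limited, which is why I ask about using the leftover units for the firewall and routing.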

The largest vendors can provide next-day and sometimes same-day replacement parts, worldwide. How would you approximate this with RCS products? One idea that comes to mind: keeping one or two "community" servers in each rack for developers to log in and run tests, running Jenkins (think Travis CI, but for POWER9), on the understanding that these servers can be scavenged on short notice to resolve production outages.
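As a sketch of how the scavenging could work day to day, something like this could take a community node offline in Jenkins before the box is pulled for parts. It assumes the python-jenkins library, and the URL, credentials, and node name are hypothetical placeholders:

Code: [Select]
# Take a "community" build node offline before scavenging it for spares.
# Assumes the python-jenkins library; the URL, credentials, and node name
# are hypothetical placeholders.
import jenkins

server = jenkins.Jenkins("https://ci.example.org",
                         username="ops", password="api-token")

NODE = "talos-community-01"  # hypothetical community server
if server.node_exists(NODE):
    server.disable_node(NODE, msg="scavenged for production spares")
    print(f"{NODE} marked offline; queued builds will go to other nodes")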

Is there any interest in hosting here in Switzerland right now?
Debian Developer
https://danielpocock.com

ClassicHasClass

  • Sr. Member
  • Posts: 443
  • Karma: +34/-0
  • Talospace Earth Orbit
Re: hosting requirements and strategies
« Reply #1 on: July 31, 2020, 10:08:00 am »
What attracts me most to Power-based hosting is the familiarity of the architecture and the smaller attack surface. I also think that more security-conscious customers would favour the auditable firmware and the better Spectre and Meltdown mitigations, especially on shared hosts. The problem is that there isn't a great deal of such hosting available (Integricloud, of course, but relatively few others), and only highly technical folks like us are aware of these differences.
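For example, on any recent Linux kernel (ppc64le included) the mitigation status the kernel advertises is right there in sysfs; a minimal Python sketch to dump it:

Code: [Select]
# Print the kernel's reported CPU vulnerability mitigations.
# Works on any recent Linux kernel, ppc64le included.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:24} {entry.read_text().strip()}")

Which entries exist and what they report varies by platform and kernel version, but it makes the comparison easy to show to a security-conscious customer.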

pocock

  • Sr. Member
  • Posts: 280
  • Karma: +31/-0
Re: hosting requirements and strategies
« Reply #2 on: July 31, 2020, 11:43:21 am »
There are a number of Swiss data centers within walking distance of my home.  If I don't feel like walking, I can reach any of them in less than five minutes on my motorbike.  If anybody wanted to collaborate on setting up a rack of Talos II servers in one of these, I'd be happy to take part and to work out a support agreement proportional to the experimental nature of the platform.  For example, call-out fees for reboots and the like could be minimized or waived where they fit within my regular visits to the sites.
Debian Developer
https://danielpocock.com

SiteAdmin

  • Administrator
  • Posts: 41
  • Karma: +15/-0
  • RCS Staff
Re: hosting requirements and strategies
« Reply #3 on: July 31, 2020, 03:47:32 pm »

Quote from: pocock on July 21, 2020, 11:38:18 am
Can anybody comment on ideas or case studies for hosting with the Talos II platform? [...] Is there any interest in hosting here in Switzerland right now?

This is an interesting discussion, thank you for starting it!

A couple of points we'd like to make:

1.) Our sister companies, Raptor Engineering and Integricloud, have been running racks of POWER servers for many years now.  The latter is US-based but offers both VPS and bare-metal options; the reliability data for those machines has come back as very, very good.

2.) We can do custom (paid) support contracts for things like overnight replacement parts.  We just don't get many organizations that feel they require it, so it's normally negotiated on a one-off basis.  Our suspicion is that many organizations keep spare parts on hand to avoid ending up in an emergency call-out / parts-acquisition scenario, but we don't have hard data to validate that.

pocock

  • Sr. Member
  • Posts: 280
  • Karma: +31/-0
Re: hosting requirements and strategies
« Reply #4 on: July 31, 2020, 04:12:05 pm »
Quote from: SiteAdmin on July 31, 2020, 03:47:32 pm
2.) We can do custom (paid) support contracts for things like overnight replacement parts.  We just don't get many organizations that feel they require it, so it's normally negotiated on a one-off basis.  Our suspicion is that many organizations keep spare parts on hand to avoid ending up in an emergency call-out / parts-acquisition scenario, but we don't have hard data to validate that.

That is why I suggested including some community servers in a local rack here.  They could be cannibalized at short notice to provide spare parts for more critical systems.  A spare server running Jenkins (think Travis CI, but on ppc64le) may be better than a spare server in a box.

The next-day swap-out is OK in the US, but in most other regions it is hard to get delivery in under two working days, and even then it can be incredibly expensive.
Debian Developer
https://danielpocock.com

ClassicHasClass

  • Sr. Member
  • Posts: 443
  • Karma: +34/-0
  • Talospace Earth Orbit
Re: hosting requirements and strategies
« Reply #5 on: August 02, 2020, 10:50:33 pm »
Quote from: SiteAdmin on July 31, 2020, 03:47:32 pm
Our suspicion is that many organizations keep spare parts on hand to avoid ending up in an emergency call-out / parts-acquisition scenario

As someone who is generally a self-hoster (the server room in this house is a converted spare bedroom), I definitely like having spare parts. In 2014 my main POWER6 blew its planar and it took several days to get a replacement -- I had to move everything over to the old 604e for most of the week -- so now I keep spares.