{"id":2028,"date":"2019-10-07T06:34:41","date_gmt":"2019-10-07T10:34:41","guid":{"rendered":"http:\/\/soul-repairs.com\/?p=2028"},"modified":"2019-10-10T16:07:22","modified_gmt":"2019-10-10T20:07:22","slug":"kubernetes-openshift-resource-protection-with-limit-ranges-and-resource-quotas","status":"publish","type":"post","link":"https:\/\/soul-repairs.com\/blog\/2019\/10\/07\/kubernetes-openshift-resource-protection-with-limit-ranges-and-resource-quotas\/","title":{"rendered":"Kubernetes\/OpenShift Resource Protection with Limit Ranges and Resource Quotas"},"content":{"rendered":"<p>One of the most crucial metrics of success for an enterprise application platform is if the platform can protect: a) the applications running on it, and b) itself (and its underlying infrastructure).<em> A<\/em>ll threats to an application platform eventually come from something <em>within<\/em> that platform &#8211; an application can be hacked, and then it attacks other applications; or there could be a privilege escalation attack going after the underlying host infrastructure; or an application can accidentally hoard platform resources, choking out other apps from being able to run.<\/p>\n<p><!--more--><\/p>\n<p>The solution to this is some amount of isolation &#8211; like quarantining someone who&#8217;s sick so they don&#8217;t get <em>other<\/em> people sick. And one of the reasons we love OpenShift is that it does exactly this &#8211; it effectively isolates applications. 
On OpenShift, applications can operate in their defined box o&#8217; resources, and they usually can&#8217;t destroy each other or the underlying platform.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-3108 aligncenter\" src=\"https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/working-diagrams-Page-2-300x181.png\" alt=\"\" width=\"591\" height=\"357\" srcset=\"https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/working-diagrams-Page-2-300x181.png 300w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/working-diagrams-Page-2-768x463.png 768w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/working-diagrams-Page-2-1024x618.png 1024w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/working-diagrams-Page-2-448x270.png 448w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/working-diagrams-Page-2.png 1200w\" sizes=\"auto, (max-width: 591px) 100vw, 591px\" \/><\/p>\n<p>In addition to keeping the applications, the platform, and the infrastructure more secure, this <em>also<\/em> enables the power of <strong>ownership <\/strong>and at least one piece of the goal that is &#8220;DevOps.&#8221; Because applications are inherently more isolated, infrastructure admins can spend <em>less<\/em> time and energy worrying about applications, and <em>more<\/em> time focusing on\u00a0keeping the platform healthy and secure.<\/p>\n<p>This is part of why developer self-service is possible, because the applications in one namespace\/project can&#8217;t easily hurt the applications in another. One of the most important pieces of any Kubernetes-based container platform being successful is properly setting up resource boundaries and sandboxing. In OpenShift, these resource boundaries are called <a href=\"https:\/\/docs.openshift.com\/container-platform\/3.11\/dev_guide\/compute_resources.html\">LimitRanges and ResourceQuotas<\/a>. 
These are set at the <em>project<\/em> level &#8211; and it&#8217;s super important to set these limits on <strong>every<\/strong> project. You never know when someone will deploy some really inefficient code or have a massive increase in workload &#8211; and that&#8217;s the great part about these. You don&#8217;t <em>have<\/em> to know.<\/p>\n<p>Like many OpenShift features, these are also part of Kubernetes itself. Pretty much everything we&#8217;ll say from here on out applies to Kubernetes as much as OpenShift.<\/p>\n<h2>Overview<\/h2>\n<p>Clusters (running instances of OpenShift) have <em>physical<\/em> limitations for their CPU and memory. Somewhere under each cluster is <em>some<\/em> real hardware that has an actual, real maximum amount of resources because of what the hardware contains. Within a cluster are <strong>projects<\/strong> (namespaces, buckets in which applications are logically grouped), which contain\u00a0<strong>pods<\/strong> (running instances of an application).<\/p>\n<p><span style=\"font-weight: 400;\">There are three different types of resource boundaries in OpenShift. Collectively, they&#8217;re sometimes called &#8220;capping.&#8221;\u00a0<\/span><\/p>\n<ul>\n<li><a href=\"https:\/\/docs.openshift.com\/container-platform\/3.11\/admin_guide\/quota.html\">Resource Quotas<\/a> &#8211; these are boundaries around CPU, memory, storage, and object counts as a\u00a0<em>total<\/em> for a single project. This would translate to, &#8220;Project X can have 4 cores to play with, and we don&#8217;t particularly care\u00a0<em>what<\/em> they do with it.&#8221;<\/li>\n<li><a href=\"https:\/\/docs.openshift.com\/container-platform\/3.11\/dev_guide\/compute_resources.html#dev-compute-resources\">Limit Ranges<\/a> &#8211; these are maximum, minimum, and sometimes default values that specific <strong>types of objects<\/strong> can have when they start up, also set for a single project. 
If any defined type of object (pods, containers, images, a few more) tries to start up outside these values, it won&#8217;t be able to. This would translate to, &#8220;Within Project X, pods may have between 0.5 cores and 1 core when they start up.&#8221;<\/li>\n<li><a href=\"https:\/\/docs.openshift.com\/container-platform\/3.11\/dev_guide\/compute_resources.html#dev-compute-resources\">Resource Requests and Resource Limits<\/a> &#8211; Pod-specific definitions that declare the amount of resources a pod has dibs on (requests) and promises not to use more than (limits). These are\u00a0<em>on<\/em> the pod object specification.\n<ul>\n<li>Quotas also have resource requests and resource limits &#8211; but as a <em>total\u00a0<\/em>for the entire project. That&#8217;s how the CPU, memory, and storage values are expressed.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><span style=\"font-weight: 400;\">The different types of capping all work together in a kind of hierarchy. Limit Ranges must work within the boundaries defined by the Resource Quota, and Resource Requests and Resource Limits are&#8230;well,\u00a0<em>limited<\/em> by both Limit Ranges\u00a0<em>and<\/em> Resource Quotas.<\/span><\/p>\n<p>In general, we recommend keeping the limits pretty lax unless you&#8217;re willing to do a whole lot of tuning and testing. The idea is general protection and defined boundaries, not perfect puppet strings.<\/p>\n<p>OpenShift <em>will<\/em> allow a cluster to overcommit on resource limits (how much of a resource will be used by an object), but it&#8217;s impossible to overcommit resource <em>requests <\/em>(how much of a resource can be claimed at startup of an object). 
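On a pod, the requests and limits described above are set per container under `resources`. Here's a minimal sketch (the pod name, image, and values are illustrative, not from a real project):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: starmaker                     # hypothetical pod name
spec:
  containers:
  - name: starmaker
    image: example/starmaker:latest   # hypothetical image
    resources:
      requests:
        cpu: 500m    # "dibs" - guaranteed to the pod at scheduling time
      limits:
        cpu: "1"     # hard ceiling the container can flex up to
```

This pod would satisfy the example Limit Range above (&#8220;between 0.5 cores and 1 core&#8221;).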
<span style=\"font-weight: 400;\">Any attempt to start an object (we&#8217;ll talk mostly about pods from here on out, just for simplicity&#8217;s sake &#8211; but OpenShift&#8217;s capping is intricate and can apply to more objects as well) using a resource request higher than the total resource request for the project (which is defined on the resource quota) will cause the pod to fail to start. So&#8230;it&#8217;s very important to carefully do some math to set the resource requests on the quota for the project &#8211; because otherwise pods won&#8217;t start.<\/span><\/p>\n<figure id=\"attachment_3111\" aria-describedby=\"caption-attachment-3111\" style=\"width: 828px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-3111\" src=\"https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-cluster-300x155.png\" alt=\"\" width=\"828\" height=\"428\" srcset=\"https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-cluster-300x155.png 300w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-cluster-768x396.png 768w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-cluster-1024x528.png 1024w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-cluster-524x270.png 524w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-cluster.png 1408w\" sizes=\"auto, (max-width: 828px) 100vw, 828px\" \/><figcaption id=\"caption-attachment-3111\" class=\"wp-caption-text\">In a running cluster, it looks a little something like this!<\/figcaption><\/figure>\n<h2>What is Overcommitting? Why would you want to Overcommit?<\/h2>\n<p>Overcommitting is allowing the sum of all the possible defined resource boundaries to be higher than the <em>physical<\/em> resources of the OpenShift cluster. This can be done with both Memory and CPU limits. 
This would happen if limits were set such that if every project on a cluster were to reach its defined limit, the cluster would run out of physical compute capacity.<\/p>\n<blockquote><p>Overcommitting: allowing the sum of all the possible defined resource boundaries to be higher than the physical resources of the OpenShift cluster.<\/p><\/blockquote>\n<p>What happens when this is done? Well&#8230;nothing explodes, actually. The cluster will hand out CPU in priority order, and things will slow down a bit &#8211; but they <strong>don&#8217;t<\/strong> stop. <span style=\"font-weight: 400;\">If the cluster runs out of CPU, it divides what CPU it has available between all nodes &#8211; <\/span><a href=\"https:\/\/docs.openshift.com\/container-platform\/3.6\/admin_guide\/overcommit.html#overcommit-cpu\"><span style=\"font-weight: 400;\">per the CPU resource limits set per pod<\/span><\/a><span style=\"font-weight: 400;\">. <\/span>OpenShift doesn&#8217;t allow any one application to eat all CPU &#8211; assuming you let your applications all have the same priority, which is the default. Like everything else in OpenShift, <em>there&#8217;s a way to change the default and it&#8217;s probably not what you want to do.<\/em><\/p>\n<p>So why would you want to overcommit? Well, it&#8217;s either that, or build out a cluster that can handle the maximum capacity limit of every application, all the time. Applications typically aren&#8217;t going to run\u00a0<em>quite<\/em> this hot, so&#8230;it&#8217;s usually a waste of hardware to do that. We could write a whole <em>other<\/em> blog post about capacity and capacity planning, but as a very loose guideline &#8211; shoot for 25% more capacity than your average daily peak load, and plot usage trends over time so you know when you need to add more. 
In addition, monitor pod count and memory usage, and find out if your limiting factor &#8211; what your applications eat up the most of &#8211; is memory or CPU.<\/p>\n<h2>Example: <em>StarBuilder<\/em><\/h2>\n<p><span style=\"font-weight: 400;\">Let&#8217;s walk through some examples to play with these concepts.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">An app team (StarBuilder Dev Team) has a sweet application, <\/span><i><span style=\"font-weight: 400;\">StarMaker, <\/span><\/i><span style=\"font-weight: 400;\">that creates stars, writes them to the Universe database, and notifies the universe when a new star is created.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Their application runs in an OpenShift cluster, in the project <i>StarBuilder.<\/i><\/span><span style=\"font-weight: 400;\"> There are also other projects in the cluster: <em>PlanetBuilder<\/em>, <em>AsteroidBuilder<\/em>, and <em>MoonBuilder<\/em>.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">The cluster has a physical limitation of <\/span><b>120 cores<\/b><span style=\"font-weight: 400;\"> and <strong>1024 GB of memory<\/strong>, for these <\/span><b>4 projects<\/b><span style=\"font-weight: 400;\"> that are running.<\/span><\/p>\n<figure id=\"attachment_3113\" aria-describedby=\"caption-attachment-3113\" style=\"width: 922px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-3113\" src=\"https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-3-cluster2-300x183.png\" alt=\"\" width=\"922\" height=\"563\" srcset=\"https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-3-cluster2-300x183.png 300w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-3-cluster2-768x468.png 768w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-3-cluster2-1024x624.png 1024w, https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-3-cluster2-443x270.png 443w, 
https:\/\/soul-repairs.com\/blog\/wp-content\/uploads\/2019\/10\/quotas-3-cluster2.png 1408w\" sizes=\"auto, (max-width: 922px) 100vw, 922px\" \/><figcaption id=\"caption-attachment-3113\" class=\"wp-caption-text\">StarBuilder&#8217;s OpenShift environment. Spoilers!<\/figcaption><\/figure>\n<h3>StarBuilder&#8217;s Quota &#8211; Resource Limit and Pod Count<\/h3>\n<p><span style=\"font-weight: 400;\">The cluster admins have set quotas on each project, which include CPU and memory resource limits, and a total allowed pod count. These resource limits require pods to <\/span><i><span style=\"font-weight: 400;\">also<\/span><\/i><span style=\"font-weight: 400;\"> have resource limits.<\/span><\/p>\n<blockquote><p>StarBuilder CPU Resource Limit: 100 cores<\/p><\/blockquote>\n<p>Remember, project-level CPU resource limits, as defined by a quota, translate to the <strong>maximum total amount of CPU a project can use &#8211; <\/strong>measured by the sum of the defined resource limits of its pods. So, in this case, if pod CPU resource limits were set at 1 core each, we couldn&#8217;t run more than 100 pods. This lines up nicely with the maximum number of pods in the project.<\/p>\n<p>As mentioned above, a cluster&#8217;s CPU and memory limits can be overcommitted beyond what&#8217;s actually available. In this case, the total project limits come out to 400 cores (100 + 50 + 200 + 50), but the cluster&#8217;s physical capacity is only 120 cores. 
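StarBuilder's quota might be expressed as a ResourceQuota object along these lines (a sketch; the object name is ours, and the values are the 100-core limit and 100-pod cap described above):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: starbuilder-quota    # hypothetical name
  namespace: starbuilder
spec:
  hard:
    limits.cpu: "100"   # max sum of all pod CPU resource limits in the project
    pods: "100"         # max pod count for the project
```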
<span style=\"font-weight: 400;\">If the cluster runs out of CPU because of this, like we said &#8211; it will divide the CPU it has available between all nodes &#8211; <\/span><a href=\"https:\/\/docs.openshift.com\/container-platform\/3.6\/admin_guide\/overcommit.html#overcommit-cpu\"><span style=\"font-weight: 400;\">per the CPU resource limits set per pod<\/span><\/a><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">However, a limit of 100 cores for the StarBuilder project <\/span><span style=\"font-weight: 400;\">leaves plenty of cluster capacity left over even if the entire project&#8217;s apps all have CPU issues.<\/span><\/p>\n<h3>StarBuilder&#8217;s Quota &#8211; Resource Request<\/h3>\n<p>Let&#8217;s say we set the StarBuilder&#8217;s project CPU resource request to be 20 cores. Remember, we<em>cannot overcommit resource requests, <\/em>so the sum of all resource requests across all projects <strong>can&#8217;t<\/strong> be more than 120.<\/p>\n<blockquote><p>StarBuilder&#8217;s CPU Resource Request: 20 cores<\/p><\/blockquote>\n<p><span style=\"font-weight: 400;\">Because cluster resource requests set on project quotas <\/span><b>cannot be overcommitted, <\/b>it&#8217;s important to evenly divide<span style=\"font-weight: 400;\"> cluster resource requests between projects relative to how much actual resources on the cluster each project is likely to need.<\/span><\/p>\n<h2>Scenarios<\/h2>\n<p>Let&#8217;s play around with some attempts to cap CPU and walk through what would happen.<\/p>\n<h3>Try #1: CPU Resource Limit only<\/h3>\n<p><span style=\"font-weight: 400;\">The <em>StarBuilder<\/em> Team sets their <\/span><b>CPU resource limit to 1<\/b> <strong>core<\/strong> as a default on all pods. 
<span style=\"font-weight: 400;\">They <strong>don&#8217;t<\/strong> set a default CPU resource request for pods.<\/span><\/p>\n<p>&#8230;unfortunately, if no resource request is provided for a pod, <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/memory-default-namespace\/#what-if-you-specify-a-containers-limit-but-not-its-request\"><span style=\"font-weight: 400;\">Kubernetes uses the limit <strong>as<\/strong> the request<\/span><\/a>. This means that their pods now call dibs on (request) 1 full core when they start &#8211; because that&#8217;s what the limit is set to. The project only has 20 cores at its disposal for dibs&#8217;ing, so&#8230;whenever the project tries to start its 21st pod, it and all future pods <em>definitely won&#8217;t start<\/em>.<\/p>\n<h3>Try #2: CPU Resource Limit AND CPU Resource Request<\/h3>\n<p><span style=\"font-weight: 400;\">The <em>StarBuilder<\/em> Team sets their CPU resource request .01 cores for each pod. The pod starts up with .01 core allocated to it, and it can flex up to its resource limit, which is still set to 1 core.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They scale up their application to thirty pods, which means their project now has allocated 30 cores of CPU resource limit and .3 cores of CPU resource request. So far so good!<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They start allowing requests into their application, and under heavy load their pods run up to 1 core each, for that resource limit total of 30 cores, and <\/span><b>the cluster is largely unaffected while this extra load happens<\/b><span style=\"font-weight: 400;\">.\u00a0<\/span><\/p>\n<blockquote><p>Tada! 
Quota success!<\/p><\/blockquote>\n<h3>More Applications: StarConfigurator and StarAnalyzer<\/h3>\n<p><span style=\"font-weight: 400;\">Okay, ready for more applications within this project?<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Team <em>StarBuilder<\/em> makes a second app in their project to join the <em>StarMaker<\/em> application: <\/span><i><span style=\"font-weight: 400;\">StarConfigurator<\/span><\/i><span style=\"font-weight: 400;\">, which is an app that can be used to customize star attributes and dimensions. They also make a third application, <\/span><i><span style=\"font-weight: 400;\">StarAnalyzer<\/span><\/i><span style=\"font-weight: 400;\">, which does analytics on stars and on the star database.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">They attempt to spin up 20 more pods, 10 for each of the new apps &#8211; using the same pod resource limit of 1 core\/pod and resource request of .01 core\/pod. This is an addition of 20 cores of CPU resource limit and .20 cores of resource request. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">This brings them to a total of 50 cores of CPU resource limit used of their 100 allowed, and .5 cores of CPU resource request used of their 20 allowed.\u00a0<em>Still rocking and rolling.<\/em><\/span><\/p>\n<h3><b>Going Forward with this Cluster: What Limits Mean for Projects<\/b><\/h3>\n<p><span style=\"font-weight: 400;\">Let&#8217;s say each project has a CPU resource request of 20 cores. This means that the cluster is maxed out at 6 projects (each project has 20 cores guaranteed to it, so 120 physical cores\/20 cores per project = 6 projects).<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Also, if the <em>MoonBuilder <\/em>project (resource limit of 200 cores) has all of its applications go crazy all at once, then the cluster will have CPU constraint issues because it only has 120 physical cores. 
<\/span><span style=\"font-weight: 400;\">If this happens, apps will split the CPU according to the <\/span><b>ratio of their CPU resource limits<\/b><span style=\"font-weight: 400;\">. This is another good practice: size your cluster larger than the CPU and Memory request maximums of your largest project.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If you size clusters this way, and implement CPU resource limits, other applications <\/span><i><span style=\"font-weight: 400;\">and<\/span><\/i><span style=\"font-weight: 400;\"> other projects are more or less kept safe from each other &#8211; and there&#8217;s a <strong>plan<\/strong> for how OpenShift will automatically allocate the physical resources that exist if something goes rogue.<\/span><\/p>\n<h2><b>Non-Production Clusters: Resource Limits and Load Testing<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">Non-Production clusters function just like the example. Typically &#8220;environments,&#8221; like the standard Dev, User, Staging, etc., are reflected with different <em>projects<\/em> in OpenShift. Because of this, non-Production clusters tend to have <\/span><b>many\u00a0more pods<\/b><span style=\"font-weight: 400;\">. This probably means pods will need to have <\/span><b>low default resource requests<\/b><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">However, you can do some neat science via load testing based on pods with CPU resource limits set. 
You can watch the magic of OpenShift knowing what to do if a pod or project hits its resource limit &#8211; and you\u00a0<em>also<\/em> get some kind of amazing consistency where, for example, if a pod is limited to .75 cores in non-production and can serve 500 requests\/second, you can expect that a pod in Production <\/span><i><span style=\"font-weight: 400;\">also<\/span><\/i><span style=\"font-weight: 400;\"> limited to .75 cores <\/span><i><span style=\"font-weight: 400;\">will\u00a0also serve 500 requests\/second.<\/span><\/i><\/p>\n<p><span style=\"font-weight: 400;\">And as another <em>super nerd cool<\/em>\u00a0side benefit, load testing <\/span><b>should\u00a0<\/b><span style=\"font-weight: 400;\">be possible at <\/span><b>any time during the day<\/b><span style=\"font-weight: 400;\"> if you have resource limits set up correctly &#8211; because those resource limits will prevent a single application from using <strong>all<\/strong> resources available to <strong>all<\/strong> applications.<\/span><\/p>\n<h2>Some Advice<\/h2>\n<p>Again, our general advice is to keep capping &#8211; especially the quotas and limit ranges &#8211; pretty lax unless you&#8217;re willing to do a lot of tuning and testing. You want to keep badly performing apps from stepping all over each other, not perfectly tune applications for their average load. There are days when your company does a lot of business, and days that programmers are able to deliver faster and more efficiently, and both of those mean that ultimately you want the <em>flexibility <\/em>to use more resources without the slowdown of asking for permission.<\/p>\n<p>We also recommend overcommitting on <em>resource limits<\/em> &#8211;\u00a0 not every app will have its worst day all at once. 
We recommend this so that you don&#8217;t have to have physical capacity for the cluster as <em>though<\/em> every application will always have its worst day.<\/p>\n<p>Finally, have a nice buffer of physical capacity in your clusters if you aren&#8217;t auto-scaling the cluster. We recommend at least a 25% buffer over average load &#8211; for flexibility, workload growth, and high-transaction-workload days &#8211; and keep an eye on the capacity usage via some kind of capacity planning. OpenShift lets people go\u00a0<em>much faster than they could before<\/em>, and that will be reflected in what can be a surprising rate of growth at first.<\/p>\n<h2>Try it Yourself!<\/h2>\n<p><span style=\"font-weight: 400;\">Thanks to Katacoda, you can play with Kubernetes limits yourself!<\/span><\/p>\n<p><a href=\"https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/quota-api-object\/\"><span style=\"font-weight: 400;\">https:\/\/kubernetes.io\/docs\/tasks\/administer-cluster\/quota-api-object\/<\/span><\/a><\/p>\n<p><a href=\"https:\/\/learn.openshift.com\/playgrounds\/openshift311\/\">https:\/\/learn.openshift.com\/playgrounds\/openshift311\/<\/a><\/p>\n<p><a href=\"https:\/\/labs.play-with-k8s.com\/\"><span style=\"font-weight: 400;\">https:\/\/labs.play-with-k8s.com\/<\/span><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>One of the most crucial metrics of success for an enterprise application platform is if the platform can protect: a) the applications running on it, and b) itself (and its underlying infrastructure). 
All threats to an application platform eventually come from something within that platform &#8211; an application can be hacked, and then it attacks &hellip; <\/p>\n<p class=\"read-more\"><a class=\"btn btn-default\" href=\"https:\/\/soul-repairs.com\/blog\/2019\/10\/07\/kubernetes-openshift-resource-protection-with-limit-ranges-and-resource-quotas\/\"> Read More<span class=\"screen-reader-text\">  Read More<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"nf_dc_page":"","_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[4],"tags":[47,63,87,44,67,147],"wf_post_folders":[],"coauthors":[26,11],"class_list":["post-2028","post","type-post","status-publish","format-standard","hentry","category-technology","tag-devops","tag-glossary","tag-kubernetes","tag-openshift","tag-ownership","tag-security"],"_links":{"self":[{"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/posts\/2028","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/comments?post=2028"}],"version-history":[{"count":29,"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/posts\/2028\/revisions"}],"predecessor-version":[{"id":3114,"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/posts\/2028\/revisions\/3114"}],"wp:attachment":[{"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/media?parent=2028"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/soul-repairs.com\/blog\/wp-
json\/wp\/v2\/categories?post=2028"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/tags?post=2028"},{"taxonomy":"wf_post_folders","embeddable":true,"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/wf_post_folders?post=2028"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/soul-repairs.com\/blog\/wp-json\/wp\/v2\/coauthors?post=2028"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}