How-to add nodeSelector restriction to components? #181

Open · sdurrheimer opened this issue Dec 2, 2020 · 9 comments
Labels: help wanted (Extra attention is needed), question (Further information is requested)

Comments

@sdurrheimer commented Dec 2, 2020

Hi there,

Before the recent refactor, I was able to add my nodeSelector restriction, as in the following example:

local s = 
  t.store + 
  t.store.withVolumeClaimTemplate + 
  t.store.withServiceMonitor + 
  commonConfig + {
    config+:: {
      name: 'thanos-store',
      replicas: 1,
    },
    statefulSet+: {
      spec+: {
        template+: {
          spec+: {
            nodeSelector+: {
              'k8s.scaleway.com/pool-name': 'kubernetes-infra',
            },
          },
        },
      },
    },
  };

With the new format, how can I achieve the same behavior?

I tried the following, but it does not work:

local s = t.store(commonConfig {
  replicas: 1,
  serviceMonitor: true,
  statefulSet+: {
    spec+: {
      template+: {
        spec+: {
          nodeSelector+: {
            'k8s.scaleway.com/pool-name': 'kubernetes-infra',
          },
        },
      },
    },
  },
});

Thanks in advance,

@sdurrheimer (Author) commented Dec 3, 2020

Okay, it looks like the current way to do this is:

local s = t.store(commonConfig {
  replicas: 1,
  serviceMonitor: true,
}) + {
  statefulSet+: {
    spec+: {
      template+: {
        spec+: {
          nodeSelector+: {
            'k8s.scaleway.com/pool-name': 'kubernetes-infra',
          },
        },
      },
    },
  },
};

But having a config field for that would be prettier.
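For illustration, such a config field might look like the sketch below; the nodeSelector key is hypothetical, not an existing kube-thanos config option:

local s = t.store(commonConfig {
  replicas: 1,
  serviceMonitor: true,
  // Hypothetical field: kube-thanos would have to wire this through
  // to the StatefulSet's pod template itself.
  nodeSelector: {
    'k8s.scaleway.com/pool-name': 'kubernetes-infra',
  },
});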

@Blizter commented Dec 10, 2020

I do agree with you; it would make resource selection, and even resource claims, DRY.

@metalmatze (Contributor)

This is exactly the right way to do it. 👍
What you came up with is how things are supposed to be extended by users on their own. For the most part, Kubernetes specifics aren't supposed to live in the config object.
Again, this is exactly what I would have had in mind for this too!
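As an illustration of that extension pattern, the override can be factored into a small reusable mixin; withNodeSelector is just a name made up for this sketch, assuming the component object exposes a statefulSet field as above:

local withNodeSelector(selector) = {
  statefulSet+: {
    spec+: {
      template+: {
        spec+: {
          // Merge the given labels into the pod template's nodeSelector.
          nodeSelector+: selector,
        },
      },
    },
  },
};

local s = t.store(commonConfig { replicas: 1, serviceMonitor: true })
          + withNodeSelector({ 'k8s.scaleway.com/pool-name': 'kubernetes-infra' });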

@betorvs commented Feb 25, 2021

What about adding a nodeSelector to the store shards? How do we do this?

local strs = t.storeShards(commonConfig {
  shards: 3,
  replicas: 3,
  serviceMonitor: true,
  bucketCache: {
    type: 'memcached',
    config+: {
      // NOTICE: <MEMCACHED_SERVICE> is a placeholder to generate examples.
      // List of memcached addresses that will get resolved with the DNS service discovery provider.
      // For DNS service discovery, see https://thanos.io/service-discovery.md/#dns-service-discovery
      addresses: ['memcached.%s.svc.cluster.local:11211' % commonConfig.namespace],
    },
  },
  indexCache: {
    type: 'memcached',
    config+: {
      // NOTICE: <MEMCACHED_SERVICE> is a placeholder to generate examples.
      // List of memcached addresses that will get resolved with the DNS service discovery provider.
      // For DNS service discovery, see https://thanos.io/service-discovery.md/#dns-service-discovery
      addresses: ['memcached.%s.svc.cluster.local:11211' % commonConfig.namespace],
    },
  },
}) + {
  statefulSet+: {
    spec+: {
      template+: {
        spec+: {
          nodeSelector+: {
            'k8s.scaleway.com/pool-name': 'kubernetes-infra',
          },
        },
      },
    },
  },
};

This does not add it.

@sdurrheimer (Author)

@betorvs By looking here, we can see that the store shards need an additional shards {} enclosure.

You can try:

}) + {
  shards+: {
    statefulSet+: {
      spec+: {
        template+: {
          spec+: {
            nodeSelector+: {
              '<label>': '<label-value>',
            },
          },
        },
      },
    },
  },
};

And maybe even one more level, as it loops over the number of shards.
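A sketch of that, assuming strs.shards maps shard names to objects that each contain their own statefulSet; std.mapWithKey applies the same override to every shard without spelling out the shard names:

local selector = { '<label>': '<label-value>' };

local shardedStore = strs + {
  shards: std.mapWithKey(
    // Merge the nodeSelector into each shard's StatefulSet pod template.
    function(name, shard) shard + {
      statefulSet+: {
        spec+: {
          template+: {
            spec+: {
              nodeSelector+: selector,
            },
          },
        },
      },
    },
    strs.shards
  ),
};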

@betorvs commented Feb 26, 2021

@sdurrheimer thanks for the reply. But when I added it, I got:

RUNTIME ERROR: Field does not exist: service
        local.jsonnet:262:21-47 thunk from <object <anonymous>>
        local.jsonnet:259:52-59
        <std>:715:15-22 thunk <val> from <function <format_codes_arr>>
        <std>:722:27-30 thunk from <thunk <s> from <function <format_codes_arr>>>
        <std>:592:22-25 thunk from <function <format_code>>
        <std>:592:9-26  function <format_code>
        <std>:722:15-60 thunk <s> from <function <format_codes_arr>>
        <std>:727:24-25 thunk from <thunk <s_padded> from <function <format_codes_arr>>>
        <std>:480:30-33 thunk from <thunk from <function <pad_left>>>
        <std>:480:19-34 thunk from <function <pad_left>>
        ...
        vendor/github.com/thanos-io/kube-thanos/jsonnet/kube-thanos/kube-thanos-query.libsonnet:92:11-31
        Array element 11
        Field "args"
        Array element 0
        Field "containers"
        Field "spec"
        Field "template"
        Field "spec"
        Field "thanos-query-deployment"
        During manifestation

I found another way to add it. I added a new variable:

local test = {
    spec+: {
      template+: {
        spec+: {
          nodeSelector+: {
            'k8s.scaleway.com/pool-name': 'kubernetes-infra',
          },
        },
      },
    }
};

Then I changed the shards block to:

{
  ['store-' + shard + '-' + name]: strs.shards[shard][name] + test
  for shard in std.objectFields(strs.shards) 
  for name in std.objectFields(strs.shards[shard]) 
  if strs.shards[shard][name] != null
} +

It works. I'm not sure if this is the best solution; I'm pretty new to jsonnet.

@betorvs commented Mar 2, 2021

No, it didn't work: it creates the nodeSelector inside the Services too.
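One possible fix, sketched on the assumption that every generated manifest carries a standard Kubernetes kind field: only merge the override into the StatefulSets, so the Services pass through unchanged.

{
  ['store-' + shard + '-' + name]:
    strs.shards[shard][name]
    // Only StatefulSets get the nodeSelector; everything else is unchanged.
    + (if strs.shards[shard][name].kind == 'StatefulSet' then test else {})
  for shard in std.objectFields(strs.shards)
  for name in std.objectFields(strs.shards[shard])
  if strs.shards[shard][name] != null
}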

@kakkoyun added the help wanted and question labels Mar 5, 2021
@dschaaff (Contributor)

Curious about this as well. I'm unable to get a nodeSelector working with the sharded thanos store, and I'm not super familiar with jsonnet either. Following the example here #181 (comment) gives me the error:

RUNTIME ERROR: field does not exist: service
        thanos.jsonnet:134:21-47        thunk <service>
        thanos.jsonnet:133:52-59        thunk <array_element>
        std.jsonnet:708:15-22   thunk <val>
        std.jsonnet:715:27-30   thunk <val>
        std.jsonnet:585:22-25   thunk <a>
        std.jsonnet:36:17
        std.jsonnet:36:8-19     thunk <a>
        std.jsonnet:36:8-31     function <anonymous>
        std.jsonnet:36:8-31     function <anonymous>
        std.jsonnet:585:9-26    function <format_code>
        ...
        std.jsonnet:237:7-23    function <anonymous>
        vendor/kube-thanos/kube-thanos-query.libsonnet:110:11-31        thunk <array_element>
        vendor/kube-thanos/kube-thanos-query.libsonnet:(100:9)-(142:10) object <c>
        vendor/kube-thanos/kube-thanos-query.libsonnet:188:26   thunk <array_element>
        vendor/kube-thanos/kube-thanos-query.libsonnet:188:25-28        object <anonymous>
        thanos.jsonnet:(139:18)-(145:8) object <anonymous>
        thanos.jsonnet:(138:12)-(146:6) object <anonymous>
        thanos.jsonnet:(137:16)-(147:4) object <anonymous>
        thanos.jsonnet:208:1-70 object <anonymous>
        During manifestation

@dschaaff (Contributor) commented Aug 31, 2022

I figured this out, FYI:

local storeShardCount = 9;
local strs = t.storeShards(commonConfig.config {
  replicas: 2,
  shards: storeShardCount,
  serviceMonitor: true,
  resources: {
    requests: {
      cpu: '6000m',
      memory: '18G',
    },
    limits: {
      memory: '20G',
    },
  },
  bucketCache: {
    type: 'memcached',
    config+: {
      addresses: ['dnssrv+_memcache._tcp.thanos-store-memcached.monitoring.svc.cluster.local'],
      max_item_size: '16MiB',
      max_async_buffer_size: 100000,
      max_get_multi_batch_size: 1000,
      max_idle_connections: 750,
      timeout: '750ms',
    },
  },
  indexCache: {
    type: 'memcached',
    config+: {
      addresses: ['dnssrv+_memcache._tcp.thanos-store-memcached.monitoring.svc.cluster.local'],
      max_item_size: '16MiB',
      max_async_buffer_size: 100000,
      max_get_multi_batch_size: 1000,
      max_idle_connections: 750,
      timeout: '750ms',
    },
  },
}) + {
  shards+: {
    ['shard%d' % i ]+: {
      statefulSet+: {
        spec+: {
          template+: {
            spec+: {
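              // Plain ':' replaces any nodeSelector the template already
              // had; use '+:' instead to merge with an existing selector.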
              nodeSelector: {
                'kubernetes.io/arch': 'amd64',
              },
            },
          },
        },
      },
    },
    for i in std.range(0, storeShardCount - 1 )
  },
};
