So, from the consumer's point of view, a consumer subscribes to one or more topics and tells the broker which consumer group it wants to be part of. The broker then kicks off a rebalance through the group coordinator. (The process that follows is described in this article.)
If you are using a partition assignor that comes bundled with Kafka, you will end up with roughly the same number of partitions assigned to each consumer in the group. Using the information detailed in this article, you could implement your own custom partition assignor that is tenant-aware.
If you want all messages that reference a particular tenant to go to the same consumer, you could implement that. Assuming you know which Partitioner your producers are using (if you don't specify one, they use the default partitioner), you could run each key tenant1-1, tenant1-2, ..., tenant2-1, tenant2-2, ... through the partitioner and assign all of the partitions a specific tenant's messages land on to the same consumer.
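As a rough illustration, here is a minimal sketch of that precomputation, assuming UTF-8 string keys of the form tenantN-M, a 100-partition topic, and the default partitioner's behavior for non-null keys (murmur2 over the serialized key bytes, modulo the partition count). The class and helper names here are illustrative, not a drop-in implementation; a tenant-aware assignor could hand each of the resulting partition sets to a single consumer.

```java
import org.apache.kafka.common.utils.Utils;

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class TenantPartitionMap {

    // Mirrors what the default partitioner does for a non-null key:
    // murmur2 over the key bytes, made positive, modulo the partition count.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 100;     // assumption: the topic has 100 partitions
        int tenants = 10;            // assumption: keys look like tenant<N>-<M>
        int subKeysPerTenant = 10;

        Map<String, Set<Integer>> partitionsByTenant = new HashMap<>();
        for (int t = 1; t <= tenants; t++) {
            for (int s = 1; s <= subKeysPerTenant; s++) {
                String key = "tenant" + t + "-" + s;
                partitionsByTenant
                        .computeIfAbsent("tenant" + t, k -> new TreeSet<>())
                        .add(partitionFor(key, numPartitions));
            }
        }

        // A tenant-aware assignor could give each of these partition sets
        // to one consumer in the group.
        partitionsByTenant.forEach((tenant, parts) ->
                System.out.println(tenant + " -> " + parts));
    }
}
```

If your producers use a different partitioner than the default one, you would swap its logic into partitionFor so the mapping matches what the producers actually do.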
There are a number of issues with this.
1. With the default partitioner, a hash is run over the bytes of the message key. Hashes have collisions, so a key of "tenant1-1" could, for example, land on the same partition as "tenant5-3". Assuming the hash distributes keys uniformly (which it aims to do), there is roughly a 1-in-number-of-partitions chance that any two keys collide. This is why in my previous reply I said that, at best, you would get up to 100 partitions of usage with that example, but it may be fewer than that.
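To make that concrete, here is a quick, self-contained check under the same assumptions as the sketch above (a 100-partition topic, UTF-8 string keys hashed the way the default partitioner hashes non-null keys) that counts how many distinct partitions the 100 example keys actually land on:

```java
import org.apache.kafka.common.utils.Utils;

import java.nio.charset.StandardCharsets;
import java.util.HashSet;
import java.util.Set;

public class PartitionCollisionCheck {
    public static void main(String[] args) {
        int numPartitions = 100; // assumption: 100-partition topic
        Set<Integer> used = new HashSet<>();
        for (int t = 1; t <= 10; t++) {
            for (int s = 1; s <= 10; s++) {
                byte[] keyBytes = ("tenant" + t + "-" + s).getBytes(StandardCharsets.UTF_8);
                // Same hash the default partitioner applies to a non-null key.
                used.add(Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions);
            }
        }
        // If any two keys collide, this prints fewer than 100.
        System.out.println("Distinct partitions used: " + used.size());
    }
}
```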
You can write your own partitioner if you want, which gives you fine-grained control over which partition a particular message shows up on. Perhaps the partitioner would deserialize the key, retrieve the tenant ID, subtract one, multiply that by 10, add the subvalue, and then subtract one from the result (a sketch of such a partitioner follows the worked example):
Example:
tenant1-1
1 - 1 = 0
0 * 10 = 0
0 + 1 = 1
1 - 1 = 0
That one would be assigned to partition 0.
tenant3-5
3 - 1 = 2
2 * 10 = 20
20 + 5 = 25
25 - 1 = 24
This one would be assigned to partition 24.
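A minimal sketch of such a partitioner, assuming string keys of the form tenant<N>-<M> with at most 10 subvalues per tenant (the class name and key parsing here are illustrative, not a production implementation):

```java
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

import java.util.Map;

// Illustrative only: maps a key like "tenant3-5" to partition (3 - 1) * 10 + 5 - 1 = 24.
public class TenantKeyPartitioner implements Partitioner {

    private static final int SUBVALUES_PER_TENANT = 10; // assumption from the example above

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // Assumes every key is a String of the form "tenant<N>-<M>".
        String k = (String) key;
        String[] parts = k.substring("tenant".length()).split("-");
        int tenant = Integer.parseInt(parts[0]);
        int sub = Integer.parseInt(parts[1]);
        return (tenant - 1) * SUBVALUES_PER_TENANT + (sub - 1);
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}
```

You would then point your producers at it with the partitioner.class producer config.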
Putting all of this together, you can get extremely tight control over which partitions messages are sent to and which consumers they end up on. With that control comes an extreme level of rigidity.
For example: you started with 10 tenants and another one comes along. You now have 11 tenants, and you need to repartition the topic.
For total segregation you will need exactly as many consumers as tenants. If a particular tenant starts overwhelming a topic, you won't be able to effectively split it and preserve the segregation. This is just the tip of the iceberg as far as complications are concerned.
If your use case can support it, it is a much better idea to stay with the default setup. If you must key off of the tenant ID, then use that, but think about how many partitions you will need to support. There is an art to setting up your Kafka topics to support all of this. The more you can treat the internals as a black box and Kafka as a dumb pipe, the better.