The problem comes from the fact that clustered indexes store the underlying data in sorted order. If you use sequential IDs such as 1, 2, 3, 4, 5, it's cheap for the database to just append each new row at the end of the index. If you instead use random IDs, the database management system has to keep reorganizing the data to maintain a sort order that has no rhyme or reason.
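Here's a rough sketch of the idea (not any real storage engine, just a sorted Python list standing in for the clustered index's leaf pages): with sequential keys every insert is an append, while with random keys almost every insert lands somewhere in the middle, which on a real B-tree translates to page splits and data shuffling.

```python
import bisect
import random

def count_middle_inserts(keys):
    """Count how many inserts land somewhere other than the end of the
    sorted structure (the rough analogue of a page split / reorganization)."""
    rows = []
    middle_inserts = 0
    for key in keys:
        pos = bisect.bisect_left(rows, key)
        if pos < len(rows):          # not a simple append at the tail
            middle_inserts += 1
        rows.insert(pos, key)        # shifts everything after pos
    return middle_inserts

sequential = list(range(10_000))                      # 1, 2, 3, ... style IDs
random_ids = random.sample(range(10**9), 10_000)      # UUID-like random keys

print("sequential:", count_middle_inserts(sequential))  # 0 -- every insert is an append
print("random:    ", count_middle_inserts(random_ids))  # nearly all inserts land mid-list
```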
So if I am understanding you correctly, and the "clustering" you are referring to is the database itself being clustered, where the server a row gets saved to is determined by its ID, then you do get some benefit: the rows will be fairly evenly distributed across servers. However, you still run into the same issue on each individual server, whose local index has to do the extra work to keep things sorted. (Although in practice there may be some benefit from each server only being in charge of part of the key space.)
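To make that concrete, here's a minimal sketch assuming a simple hash-on-ID sharding scheme (not any particular product): random IDs spread rows evenly across servers, but each server still keeps its local slice sorted, so you still pay the mid-structure insert cost locally.

```python
import bisect
import random
from collections import defaultdict

NUM_SERVERS = 4

def shard_for(row_id: int) -> int:
    """Pick a server from the row ID; hashing keeps the distribution even."""
    return hash(row_id) % NUM_SERVERS

shards = defaultdict(list)        # server -> locally sorted row IDs
mid_inserts = defaultdict(int)    # server -> inserts that were not appends

for row_id in random.sample(range(10**9), 20_000):
    server = shard_for(row_id)
    rows = shards[server]
    pos = bisect.bisect_left(rows, row_id)
    if pos < len(rows):           # same local re-sorting cost as before
        mid_inserts[server] += 1
    rows.insert(pos, row_id)

for server in range(NUM_SERVERS):
    print(f"server {server}: {len(shards[server])} rows, "
          f"{mid_inserts[server]} non-append inserts")
```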