I am running into issues when trying to use the kubernetes dashboard. Searching for my pod name rarely finds anything even though the pod exists.
Looking at the HTTP requests in the browser, the dashboard is sending tons of requests, and many of them are failing.
The issue happens in multiple clusters. Is there something wrong with kubernetes?

This is a known issue with the kubernetes dashboard, which is poorly made software; see the many tickets on GitHub:
A search in the dashboard goes through all resources in the cluster, which can take some time on a real cluster with thousands of pods and objects.
The dashboard is automatically refreshing all resources every 5 seconds by default.
And that's where things break. The dashboard resends a batch of heavy requests every 5 seconds, aborting any previous requests that did not complete fast enough. Most requests are never able to complete. (Note that web servers typically continue to process a request even when the client disconnects, so the aborted requests still consume resources on the server.)
This is a fantastic example of why you never set short timeouts (5 seconds) with auto-retries: it's guaranteed to cause catastrophic cascading failure in production as soon as there's a bit of usage and resource pressure. There's a chapter about that in the SRE book from Google, if only they read their own book: https://sre.google/sre-book/addressing-cascading-failures/
Some tuning to help:
- To mitigate the issue for yourself, adjust your personal settings in the dashboard UI: Settings -> Resource auto-refresh time interval -> 0 seconds (disabled).
- The kubernetes admin has to disable auto-refresh for all users in the global settings, see the Helm setting resourceAutoRefreshTimeInterval for example (sketch after this list).
- Do not set CPU limits on the dashboard container. The dashboard sends batches of HTTP requests; they compete for resources and time out when facing CPU limits. (If you really want to set CPU limits, look at throttle statistics and adjust the limits accordingly; for me I am seeing throttling up to 10 CPUs. See the second sketch below.)
- Raise or do not set memory limits. In the GitHub issues linked above, Kubernetes developers say 2 GB is fine for a small cluster and to increase to 8 GB or more for a large cluster. Users are reporting OOM (out-of-memory crashes) with as much as 24 GB of memory with the default 5 second refresh interval (the OOM resolved after disabling refresh). Make sure you've disabled auto-refresh and be generous with memory. You can look at the memory usage of the container and adjust accordingly (see the last sketch below).
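
For the admin side, a minimal sketch of disabling auto-refresh for everyone through the Helm chart. The release and namespace names are assumptions, and the value path settings.global.resourceAutoRefreshTimeInterval may differ between chart versions, so check the chart's values.yaml:

```
# Assumed release/namespace names; adjust to your install.
# Setting the interval to 0 disables resource auto-refresh for all users.
helm upgrade kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --namespace kubernetes-dashboard \
  --reuse-values \
  --set settings.global.resourceAutoRefreshTimeInterval=0
```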
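
If you do keep a CPU limit, here is a rough way to check for throttling. The namespace, pod label, and the presence of a shell in the image are all assumptions; on cgroup v2 nodes the stats file is /sys/fs/cgroup/cpu.stat instead:

```
# Find the dashboard pod (label/namespace assumed, adjust to your install)
POD=$(kubectl -n kubernetes-dashboard get pods \
  -l app.kubernetes.io/name=kubernetes-dashboard -o name | head -n 1)

# cgroup v1 path shown; on cgroup v2 use /sys/fs/cgroup/cpu.stat
kubectl -n kubernetes-dashboard exec "$POD" -- cat /sys/fs/cgroup/cpu/cpu.stat
# If nr_throttled / throttled_time keep climbing, the CPU limit is too low.
# If the image has no shell, the cAdvisor metric
# container_cpu_cfs_throttled_periods_total gives the same information in Prometheus.
```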
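
To size the memory limit, a quick way to watch actual usage (requires metrics-server; the namespace is an assumption):

```
# Watch memory usage of the dashboard pod during normal use
kubectl -n kubernetes-dashboard top pod
# After disabling auto-refresh, set the memory limit comfortably above the observed peak.
```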