I am trying to build a data pipeline for a data engineering project using S3, Glue, Athena, etc. I am stuck setting up the Glue crawler to index the data. Even though I set up the role according to the requirements, it still gives me the following error:
{"service":"AWSGlue","statusCode":400,"errorCode":"AccessDeniedException","requestId":"7bd42729-
bc4b-4e22-af2a-553860002c64","errorMessage":"Account 834025784276 is denied
access.","type":"AwsServiceError"}


I know this is an old question, but I wanted to share some feedback on our experience with this issue, as we didn't find much about it on the internet.
We started experiencing this issue at a point that seemed fairly random to us: we had been using the service for 3 months, and the account had been created more than 2 years earlier.
We reached out to AWS Support, who went through several steps to investigate the issue.
Funnily enough, we had an issue with another service (CodePipeline, which could not start new CodeBuild builds anymore) that led to a similar solution and to us finding this explanation of the containment score: https://towardsaws.com/containment-score-of-aws-3a893231e948
TL;DR, in case the link goes down:
The author could not start a CodeBuild build; it failed with "Cannot have more than 0 builds in queue for the account".
The support engineer mentioned the "containment score".
So it seems that AWS evaluates, for each account and depending on its usage, a "containment score" that it uses to set limits on service usage, presumably to avoid things scaling up wildly in case of a hacked account or similar. I have no idea whether this score is per service or global (probably per service).
So if you encounter this, make sure to mention it to the support engineer, as it could help resolve the issue. In our case it took ~5 days to find the solution, which is a crazy delay when the issue is blocking a production environment.
Hope this helps!