I am getting a 503 after exactly 30 seconds while exporting all user data from a React app.
// superagent is the HTTP client; noCache (a superagent plugin) and i18n
// are app-level modules imported elsewhere in the codebase.
import superagent from 'superagent'

export const get = (
  url: string,
  queryParams: Object = {},
  extraHeaders: Object = {},
  responseType: string = 'text',
  callback?: number => void
): Promise<*> =>
  superagent
    .get(url)
    .timeout({
      response: 500000, // wait up to 500 s for the first byte
      deadline: 600000  // allow up to 600 s for the entire request
    })
    .use(noCache)
    .set('Accept-Language', (i18n.language || 'en').split('_')[0])
    .set(extraHeaders)
    .responseType(responseType)
    .query(queryParams)
    .on('progress', e => {
      if (callback) {
        callback(e.percent)
      }
    })
Technology stack: Akka HTTP (backend), React (frontend), Nginx (Docker image). I have tried to access the Akka API directly with a curl command; the request completed successfully in 2.1 minutes and the data was exported to a .csv file.
Curl command: curl --request GET --header "Content-Type: text/csv(UTF-8)" "http://${HOST}/engine/export/details/31a0686a-21c6-4776-a380-99f61628b074?dataset=${DATASET_ID}" > export_data.csv
NOTE: on my local environment I am able to export all records from the React UI in 2.5 minutes, but this issue occurs on the TEST site, which is set up with Docker images for this application.
Error At Browser Console:
GET http://{HOST}/engine/export/details/f4078a63-85bc-43ac-b9a9-c58f6c8193da?dataset=mexico 503 (Service Unavailable)
Uncaught (in promise) Error: Service Unavailable
at Request.<anonymous> (vendor.js:1)
at Request.Emitter.emit (vendor.js:1)
at XMLHttpRequest.t.onreadystatechange (vendor.js:1)
This is happening on both the PRODUCTION and TEST sites. The only difference between the local environment and the test site is the Docker images.
Could you please help me with this?
Thank you in advance.
On your local machine you have plenty of resources. On the remote host that is responding with 503, you have exceeded capacity in one of four resource types:

- CPU
- Memory
- Disk
- Network

These are ordered from least expensive to most expensive. Both disk and network are typically off-bus, with network orders of magnitude slower than any other access type.
On your local machine I am guessing you have exclusive access, so locked resources that need cleanup are a non-issue. On the remote host you have arbitrated (non-exclusive) access to the environment, and your requests run concurrently with everyone else's. It could be something as simple as running out of file handles/file descriptors to satisfy your query because the backend hosts do not clean up orphaned connections fast enough.
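One quick way to test the file-descriptor theory is to watch descriptor and connection usage on the remote host while an export is in flight. A minimal sketch, assuming you have shell access to the host; <pid> and <container> are placeholders for your backend process and container:

# System-wide file-descriptor usage: allocated vs. maximum
cat /proc/sys/fs/file-nr

# Per-process limit and current usage for the backend (replace <pid>)
grep "Max open files" /proc/<pid>/limits
ls /proc/<pid>/fd | wc -l

# Sockets that are lingering instead of being cleaned up
ss -s
ss -tan state time-wait | wc -l

# Limits inside a Docker container can differ from the host
docker exec <container> sh -c 'ulimit -n'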
If you have nailed down all of the differences between your two configurations and found none (just local vs. remote), then you are left with the resource problem of other users competing on the system.
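If you do want to nail those differences down, dumping the effective configuration from the running containers and diffing it against your local setup is a cheap first step. A rough sketch; the container names and output file names below are placeholders:

# Effective Nginx configuration actually loaded in the TEST container
docker exec <nginx-container> nginx -T > nginx-test.conf

# File-descriptor limit the backend container actually runs with
docker exec <backend-container> sh -c 'ulimit -n'

# Diff against the same dump taken from your local environment
diff nginx-local.conf nginx-test.conf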