GitLab CI: `$CI_JOB_LOG` to pass a job log to another job

According to the documentation, there is a $CI_JOB_STATUS env var that is described as:

The status of the job as each runner stage is executed. Use with after_script. Can be success, failed, or canceled.

I am using it like this:

stages:
  - build
  - test
  - publish

.artifacts-template: &artifacts
  artifacts:
    expire_in: 2 weeks
    when: always
    paths:
      - bus

.test-template: &test
  stage: test
  needs: ["build"]
  allow_failure: true
  after_script:
    - mkdir -p bus/$CI_JOB_NAME # ensure the target directory exists
    - echo $CI_JOB_STATUS > bus/$CI_JOB_NAME/status
  <<: *artifacts

#...

And that works really well. My test jobs dump their statuses into artifacts in the after_script step, and the publish job then publishes those statuses to the website.
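
For illustration, concrete jobs built from these templates might look like the sketch below; the job name unit-test, the test command, and the publishing step are hypothetical, since the question elides them:

unit-test:
  <<: *test
  script:
    - ./run-tests.sh # hypothetical test command

publish:
  stage: publish
  script:
    # print each collected status; the real job would upload them to the website
    - |
      for f in bus/*/status; do
        echo "$(basename "$(dirname "$f")") finished with status $(cat "$f")"
      done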


I would like to extend this workflow to include logs from those test jobs as well.

Therefore, in my after_script step, I would like to use something like

echo $CI_JOB_LOG > bus/$CI_JOB_NAME/log

or

cp $CI_JOB_LOG_PATH bus/$CI_JOB_NAME/log

What's the proper way to acquire the log of a job, so it can be dumped into a file and reused by a next-stage job in the same pipeline?

There are 2 solutions below.

Solution 1:

To pass the job log to another job in GitLab CI, you will need a workaround: I do not think there is a direct environment variable like $CI_JOB_LOG that holds the job's log.

Instead, in the script section of your job, redirect the output of your commands to a file; you will need to do this for each command whose output you want to capture. Then define this log file as an artifact, so it is passed to subsequent jobs, where it can be read like any other artifact.

Your .gitlab-ci.yml would then be:

stages:
  - build
  - test
  - publish

.artifacts-template: &artifacts
  artifacts:
    expire_in: 2 weeks
    when: always
    paths:
      - bus

.capture-log: &capture_log
  script:
    - set -o pipefail # so that tee does not mask a failing command's exit status
    - command1 2>&1 | tee -a job.log # Replace 'command1' with your actual command; 2>&1 also captures stderr
    - command2 2>&1 | tee -a job.log # Same for 'command2', and so on
    # ...
    - mkdir -p bus/$CI_JOB_NAME
    - echo $CI_JOB_STATUS > bus/$CI_JOB_NAME/status
    - mv job.log bus/$CI_JOB_NAME/log

.test-template: &test
  stage: test
  needs: ["build"]
  allow_failure: true
  <<: [*capture_log, *artifacts]

# Define your jobs using templates

The script: section is modified to capture the output of each command and append it to job.log; job.log is then moved to the bus/$CI_JOB_NAME directory along with the job status file.
The artifacts: configuration from the artifacts template makes sure both the status and log files are available to subsequent jobs. Note that the two anchors are merged with a single <<: [*capture_log, *artifacts] key, since a YAML mapping cannot contain two <<: keys.

Note: this approach does not capture system-generated logs (like those produced automatically by GitLab CI). Capturing those would require a feature that GitLab CI does not currently support natively. However, for most use cases, capturing the output of your own script commands should be enough.
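
As a variant, instead of tee-ing every command individually, the whole script's output can be redirected once. This is a minimal sketch assuming the job runs under bash (process substitution is a bash feature, and GitLab concatenates all script lines into one shell script, so the exec redirection persists across lines); command1 and command2 are placeholders as above:

.capture-log: &capture_log
  script:
    - exec > >(tee job.log) 2>&1 # mirror all subsequent stdout/stderr into job.log
    - command1
    - command2
    - mkdir -p bus/$CI_JOB_NAME
    - echo $CI_JOB_STATUS > bus/$CI_JOB_NAME/status
    - cp job.log bus/$CI_JOB_NAME/log # cp rather than mv, since tee may still hold the file open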

Solution 2:

I found a workaround, definitely more elegant than the solution proposed by Vonc.

I am dumping the job ID with echo $CI_JOB_ID > bus/$CI_JOB_NAME/id:

.test-template: &test
  stage: test
  needs: ["build"]
  allow_failure: true
  after_script:
    - mkdir -p bus/$CI_JOB_NAME # ensure the target directory exists
    - echo $CI_JOB_STATUS > bus/$CI_JOB_NAME/status
    - echo $CI_JOB_ID > bus/$CI_JOB_NAME/id
  <<: *artifacts

and then in the publish stage I use the stored job ID to download the raw log via the standard public URL:

curl -L "https://gitlab.com/namespace/project/-/jobs/${ID}/raw" -o log

As long as the publish stage "needs" the previous test jobs, all logs should already be available for download.
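
Put together, the publish job might loop over the stored IDs roughly like this sketch. The raw URL works unauthenticated only for public projects; for private projects, the commented-out alternative uses the jobs API log endpoint, where API_TOKEN is a hypothetical CI variable holding an access token (CI_API_V4_URL and CI_PROJECT_ID are standard predefined variables), and the test job name in needs is hypothetical:

publish:
  stage: publish
  needs: ["unit-test"] # hypothetical test job name
  script:
    - |
      for id_file in bus/*/id; do
        dir=$(dirname "$id_file")
        id=$(cat "$id_file")
        # public project: the raw log is reachable without credentials
        curl -L "https://gitlab.com/namespace/project/-/jobs/${id}/raw" -o "$dir/log"
        # private project alternative, via the jobs API:
        # curl --header "PRIVATE-TOKEN: $API_TOKEN" \
        #   "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/${id}/trace" -o "$dir/log"
      done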

Please comment if you think that it is not safe.