Log Forwarding

SSH/Web Access log forwarding enables you to forward SSH and Web Access session recordings to your log repository.

Prerequisites

Before configuring log forwarding, set up the relevant access type by following the instructions in:

SSH Access: Secure Remote Access Bastion

Web Application Access: Zero Trust Web Access

Syslog Configuration

Edit the values.yaml file under the audit section:

target_log_type="syslog"
target_syslog_tag="ssh-audit-export"
target_syslog_network="udp"
target_syslog_host="<host>:<port>"
target_syslog_formatter="[default=text]|cef"
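
For example, to forward audit logs in CEF format to a syslog server listening on UDP port 514 (the host name and port here are illustrative):

target_log_type="syslog"
target_syslog_tag="ssh-audit-export"
target_syslog_network="udp"
target_syslog_host="syslog.example.com:514"
target_syslog_formatter="cef"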

Note:
The output message conforms to the Syslog format and assumes the Syslog server does not add its own formatting to the message.

Default format: <date> <time> <host name> <log level> <message>

The target_syslog_formatter variable controls the format of the output message: text (the default) or cef for CEF format.
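
After updating values.yaml, apply the change to the running gateway, for example by upgrading the Helm release (the release and chart names below are placeholders for your own):

helm upgrade <release-name> <repo>/<chart-name> -f values.yaml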

Splunk Configuration

Prerequisite: a Splunk HTTP Event Collector (HEC).

target_log_type="splunk"
target_splunk_sourcetype="<your_sourcetype>"
target_splunk_source="<your_source>"
target_splunk_index="<your_index>"
target_splunk_token="<your_token>"
target_splunk_url="<your_splunk_host_address>"
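
As an illustration, a configuration pointing at a Splunk HEC endpoint might look like this (the URL, sourcetype, and index are example values; HEC listens on port 8088 by default):

target_log_type="splunk"
target_splunk_sourcetype="_json"
target_splunk_source="akeyless-gateway"
target_splunk_index="main"
target_splunk_token="<your_token>"
target_splunk_url="https://splunk.example.com:8088"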

ELK / Logstash Configuration

target_log_type="logstash"
target_logstash_dns="localhost:8911"
target_logstash_protocol="tcp"

Configure Logstash to listen on the same port and protocol by adding the following to the logstash.conf file:
input { tcp { port => 8911 codec => json } }
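
If you also want the same pipeline to ship the received events onward, you can add an output stanza. A minimal sketch, assuming a local Elasticsearch and an illustrative index name:

input { tcp { port => 8911 codec => json } }
output { elasticsearch { hosts => ["http://localhost:9200"] index => "ssh-audit" } }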

ELK Elasticsearch Configuration

target_log_type="elasticSearch"

Elasticsearch server - requires one of the following:
  target_elasticsearch_server_type="elastic-server-nodes"
  target_elasticsearch_nodes="https://host1:9200,https://host2:9200"
or
  target_elasticsearch_server_type="elastic-server-cloudId"
  target_elasticsearch_cloud_id="<your_cloudId>"

Elasticsearch authentication - requires one of the following:
  target_elasticsearch_auth_type="elastic-auth-apiKey"
  target_elasticsearch_api_key="<your_apiKey>"
or
  target_elasticsearch_auth_type="elastic-auth-usrPwd"
  target_elasticsearch_user_name="<your_user>"
  target_elasticsearch_password="<your_pwd>"

target_elasticsearch_index="<your_index>" (required)
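
Putting these together, a complete configuration using a node list with API-key authentication would look like this (the index name is illustrative):

target_log_type="elasticSearch"
target_elasticsearch_server_type="elastic-server-nodes"
target_elasticsearch_nodes="https://host1:9200,https://host2:9200"
target_elasticsearch_auth_type="elastic-auth-apiKey"
target_elasticsearch_api_key="<your_apiKey>"
target_elasticsearch_index="ssh-audit"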

Logz.io Configuration

target_log_type="logz_io"
target_logz_io_token="<TOKEN>"

target_logz_io_protocol="tcp"
OR
target_logz_io_protocol="https"

For details about log shipping tokens, see the Logz.io documentation.

AWS S3 Configuration

Note:

Logs are uploaded to your S3 bucket at 10-minute intervals. Keep in mind that if your pod scales down or restarts, any logs that have not yet been uploaded to the bucket will be lost.

target_s3_folder_prefix=""  # default value "akeyless-log"
target_s3_bucket_name=""
target_s3_aws_access_id=""
target_s3_aws_access_key=""
target_s3_aws_region=""
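
For example, with an illustrative bucket name and region (the credentials are placeholders):

target_s3_folder_prefix="akeyless-log"
target_s3_bucket_name="my-audit-logs"
target_s3_aws_access_id="<your_access_key_id>"
target_s3_aws_access_key="<your_secret_access_key>"
target_s3_aws_region="us-east-1"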

Azure Log Analytics

Logs are sent to the specified Log Analytics workspace according to the provided workspace ID.

azure_workspace_id=""
azure_workspace_key="" # can be "Primary key" or "Secondary key"
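
Both values can be found in the Azure portal under your Log Analytics workspace, typically on the Agents management page, where the Workspace ID and the Primary and Secondary keys are listed.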

STDOUT Configuration

To forward logs to stdout, set the following environment variable:

env:
  - name: LOG_FORWARDING_STDOUT
    value: "true"

Datadog

Setting up log forwarding to Datadog:

target_log_type="datadog"
target_datadog_host="<datadog host, e.g. datadoghq.com>" (required)
target_datadog_api_key="<datadog api key>" (required)
target_datadog_log_source="<the integration name associated with your logs>" (optional; default: akeyless)
target_datadog_log_tags="<tags associated with your logs, in the form key:val,key:val, e.g. env:test,version:1>" (optional)
target_datadog_log_service="<the name of the application or service generating the log events>" (optional; default: akeyless-gateway)
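
For example, using the host and tag values suggested above (the API key is a placeholder):

target_log_type="datadog"
target_datadog_host="datadoghq.com"
target_datadog_api_key="<datadog api key>"
target_datadog_log_tags="env:test,version:1"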
