Log Forwarding Configuration
Standalone Gateway
To forward all your Akeyless logs directly from your Gateway, create a local config file with the relevant configuration for your target log server, as described below.
To start your Gateway with this setting, mount the local config file into the Gateway Docker container at /root/.akeyless/logand.conf:
docker run -d -p 8000:8000 -p 8200:8200 -p 18888:18888 -p 8080:8080 -v {path-to}/log_forwarding_conf_file:/root/.akeyless/logand.conf -e ADMIN_ACCESS_ID="p-xxxxxxx" -e ADMIN_ACCESS_KEY="<YourAccessKey>" --name akeyless-gw akeyless/base
Alternatively, you can forward your logs using an environment variable:
docker run -d -p 8000:8000 -p 8200:8200 -p 18888:18888 -p 8080:8080 -e LOG_FORWARDING='enable="true"\ntarget_syslog_tag="ssh-audit-export"\ntarget_log_type="syslog"\ntarget_syslog_network="udp"\ntarget_syslog_host="my-syslog:514"\ntarget_syslog_formatter="text"' -e ADMIN_ACCESS_ID="p-xxxxxxx" -e ADMIN_ACCESS_KEY="<YourAccessKey>" --name akeyless-gw akeyless/base
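As a sketch, the mounted config file from the first docker run example can be prepared locally like this; the Syslog values mirror the environment-variable example above, and ./gw-conf is an arbitrary local path:

```shell
# Write the log forwarding settings to a local file
# (values mirror the environment-variable example; my-syslog:514 is a placeholder)
mkdir -p ./gw-conf
cat > ./gw-conf/logand.conf <<'EOF'
enable="true"
target_syslog_tag="ssh-audit-export"
target_log_type="syslog"
target_syslog_network="udp"
target_syslog_host="my-syslog:514"
target_syslog_formatter="text"
EOF
# Mount it into the container with:
#   -v "$(pwd)/gw-conf/logand.conf:/root/.akeyless/logand.conf"
```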
Syslog
Set the following settings inside your local config file:
enable="true"
target_syslog_tag="ssh-audit-export"
target_log_type="syslog"
target_syslog_network="udp"
target_syslog_host="<host>:<port>"
target_syslog_formatter="[default=text]|cef"
Note:
The output message conforms to the Syslog format and assumes the Syslog server does not add its own formatting to the message.
Default format: <date> <time> <host name> <log level> <message>
The target_syslog_formatter variable controls the output format: text (the default) or cef for CEF format.
Splunk
Prerequisites: a configured Splunk HTTP Event Collector (HEC).
enable="true"
target_log_type="splunk"
target_splunk_sourcetype="<your_sourcetype>"
target_splunk_source="<your_source>"
target_splunk_index="<your_index>"
target_splunk_token="<your_token>"
target_splunk_url="<your_splunk_host_address>"
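A filled-in example may help; all values below are illustrative placeholders, assuming a HEC endpoint reachable at https://splunk.example.com:8088 (8088 is Splunk's default HEC port) and an index named main:

```
enable="true"
target_log_type="splunk"
target_splunk_sourcetype="_json"
target_splunk_source="akeyless-gateway"
target_splunk_index="main"
target_splunk_token="11111111-2222-3333-4444-555555555555"
target_splunk_url="https://splunk.example.com:8088"
```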
ELK / Logstash
enable="true"
target_log_type="logstash"
target_logstash_dns="localhost:8911"
target_logstash_protocol="tcp"
Configure your Logstash to use the same port and protocol:
Add the following to the logstash.conf file:
input { tcp { port => 8911 codec => json } }
ELK / Elasticsearch
enable="true"
target_log_type="elasticSearch"
# Elasticsearch server - requires one of the following:
target_elasticsearch_server_type="elastic-server-nodes"
target_elasticsearch_nodes="https://host1:9200,https://host2:9200"
# OR
target_elasticsearch_server_type="elastic-server-cloudId"
target_elasticsearch_cloud_id="<your_cloudId>"
# Elasticsearch authentication - requires one of the following:
target_elasticsearch_auth_type="elastic-auth-apiKey"
target_elasticsearch_api_key="<your_apiKey>"
# OR
target_elasticsearch_auth_type="elastic-auth-usrPwd"
target_elasticsearch_user_name="<your_user>"
target_elasticsearch_password="<your_pwd>"
target_elasticsearch_index="<your_index>" # required
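Combining one choice from each pair above, a complete config targeting a two-node cluster with API-key authentication might look like this (hosts, key, and index name are placeholders):

```
enable="true"
target_log_type="elasticSearch"
target_elasticsearch_server_type="elastic-server-nodes"
target_elasticsearch_nodes="https://host1:9200,https://host2:9200"
target_elasticsearch_auth_type="elastic-auth-apiKey"
target_elasticsearch_api_key="<your_apiKey>"
target_elasticsearch_index="akeyless-logs"
```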
Logz.io
enable="true"
target_log_type="logz_io"
target_logz_io_token="<TOKEN>"
target_logz_io_protocol="tcp"
# OR
target_logz_io_protocol="https"
For details about log shipping tokens, see the Logz.io documentation.
AWS S3
Note:
Logs are uploaded to your S3 bucket at 10-minute intervals. Keep in mind that if your container scales down or restarts, logs that have not yet been uploaded to the bucket will be lost.
enable="true"
target_log_type="aws_s3"
target_s3_folder_prefix="" # default value "akeyless-log"
target_s3_bucket_name=""
target_s3_aws_access_id=""
target_s3_aws_access_key=""
target_s3_aws_region=""
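A filled-in example, with placeholder values throughout; the access key pair should belong to an AWS identity permitted to write to the bucket:

```
enable="true"
target_log_type="aws_s3"
target_s3_folder_prefix="akeyless-log"
target_s3_bucket_name="my-gateway-logs"
target_s3_aws_access_id="<your_access_key_id>"
target_s3_aws_access_key="<your_secret_access_key>"
target_s3_aws_region="us-east-1"
```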
Azure Log Analytics
Logs are sent to the workspace that matches the provided workspace ID.
enable="true"
target_log_type="azure_log_analytics"
azure_workspace_id=""
azure_workspace_key="" # can be "Primary key" or "Secondary key"
STDOUT
Setting log forwarding to stdout:
enable="true"
target_log_type="std_out"
DataDog
Setting log forwarding to the DataDog system:
enable="true"
target_log_type="datadog"
target_datadog_host="<datadog host e.g. datadoghq.com>" (required)
target_datadog_api_key="<datadog api key>" (required)
target_datadog_log_source="<the integration name associated with your log>" (optional; default: akeyless)
target_datadog_log_tags="<tags associated with your logs in the form key:val,key:val... e.g. env:test,version:1>" (optional)
target_datadog_log_service="<the name of the application or service generating the log events>" (optional; default: akeyless-gateway)
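Putting these settings together, a minimal config using the example host and default values from above might look like this (the API key is a placeholder):

```
enable="true"
target_log_type="datadog"
target_datadog_host="datadoghq.com"
target_datadog_api_key="<datadog api key>"
target_datadog_log_source="akeyless"
target_datadog_log_tags="env:test,version:1"
target_datadog_log_service="akeyless-gateway"
```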