# Run Filebeat on Cloud Foundry
You can use Filebeat on Cloud Foundry to retrieve and ship logs.
% However, version {{version.stack}} of Filebeat has not yet been released, so no build is currently available for this version.
To connect to loggregator and receive the logs, Filebeat requires credentials created with UAA. The `uaac` command creates the required credentials for connecting to loggregator.

```sh
uaac client add filebeat --name filebeat --secret changeme --authorized_grant_types client_credentials,refresh_token --authorities doppler.firehose,cloud_controller.admin_read_only
```
::::{warning}
Use a unique secret: the `uaac` command shown here is only an example. Remember to replace `changeme` with your secret, and update the `filebeat.yml` file to use your chosen secret.
::::
You deploy Filebeat as an application with no route.
Cloud Foundry requires that 3 files exist inside a directory before Filebeat can be pushed. The commands below provide the basic steps for getting it up and running.
```sh
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-{{version.stack}}-linux-x86_64.tar.gz
tar xzvf filebeat-{{version.stack}}-linux-x86_64.tar.gz
cd filebeat-{{version.stack}}-linux-x86_64
curl -L -O https://raw.githubusercontent.com/elastic/beats/{{ version.stack | M.M }}/deploy/cloudfoundry/filebeat/filebeat.yml
curl -L -O https://raw.githubusercontent.com/elastic/beats/{{ version.stack | M.M }}/deploy/cloudfoundry/filebeat/manifest.yml
```
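The downloaded `manifest.yml` describes the no-route application deployment mentioned above. As a rough sketch only (the field values here are illustrative assumptions, not the actual contents of the downloaded file), a manifest for such an app looks like this:

```yaml
# Illustrative Cloud Foundry application manifest (values are assumptions).
applications:
- name: filebeat
  memory: 512M       # memory quota for each instance
  instances: 1       # number of Filebeat instances to run
  no-route: true     # Filebeat serves no HTTP traffic, so no route is mapped
  health-check-type: process
```

Prefer the downloaded `manifest.yml` as the source of truth; this sketch only shows why the app is pushed without a route.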
You need to modify the `filebeat.yml` file to set the `api_address`, `client_id` and `client_secret`.
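As a sketch of the relevant section of `filebeat.yml` (the API address is a placeholder for your own Cloud Foundry endpoint, and the secret must match the one you created with `uaac`):

```yaml
# Sketch of the cloudfoundry input settings to edit (values are placeholders).
filebeat.inputs:
- type: cloudfoundry
  api_address: https://api.example.cf.internal   # your Cloud Foundry API endpoint
  client_id: filebeat                            # UAA client created earlier
  client_secret: changeme                        # replace with your real secret
```

The `client_id` and `client_secret` correspond to the credentials created with the `uaac client add` command shown earlier.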
Filebeat comes packaged with various pre-built {{kib}} dashboards that you can use to visualize data in {{kib}}.
If these dashboards are not already loaded into {{kib}}, you must run the Filebeat setup command. To learn how, see Load {{kib}} dashboards.
The setup command does not load the ingest pipelines used to parse log lines. By default, ingest pipelines are set up automatically the first time you run Filebeat and connect to {{es}}.
::::{important}
If you are using an output other than {{es}}, such as {{ls}}, you need to:

* Load the index template manually
* Load the {{kib}} dashboards
* Load the ingest pipelines
::::
To deploy Filebeat to Cloud Foundry, run:

```sh
cf push
```
To check the status, run:

```sh
$ cf apps

name       requested state   instances   memory   disk   urls
filebeat   started           1/1         512M     1G
```
Log events should start flowing to {{es}}. The events are annotated with metadata added by the `add_cloudfoundry_metadata` processor.
A single instance of Filebeat can ship more than a hundred thousand events per minute. If your Cloud Foundry deployment is producing more events than Filebeat can collect and ship, the Firehose will start dropping events, and it will mark Filebeat as a slow consumer. If the problems persist, Filebeat may be disconnected from the Firehose. In such cases, you will need to scale Filebeat to avoid losing events.
The main settings you need to take into account are:

* The `shard_id` specified in the `cloudfoundry` input configuration. The Firehose divides the events amongst all the Filebeat instances that use the same value for this setting. All instances with the same `shard_id` should have the same configuration.
* The number of Filebeat instances. When Filebeat is deployed as a Cloud Foundry application, it can be scaled up and down like any other application, with `cf scale` or by specifying the number of instances in the manifest.

Some basic recommendations for adjusting these settings when Filebeat is not able to collect all events:

* If Filebeat is reaching its CPU limits, increase the number of Filebeat instances deployed with the same `shard_id`.
* If Filebeat has spare CPU and events are still being dropped, deploy additional Filebeat instances with a different `shard_id`.
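As a sketch of how sharding could be configured (the shard names and the API address are illustrative assumptions, not prescribed values), two deployments that each consume a separate partition of the Firehose might look like this:

```yaml
# filebeat.yml for the first deployment. Every instance of this
# deployment shares the same shard_id, so the Firehose balances
# one partition of the event stream across them.
filebeat.inputs:
- type: cloudfoundry
  api_address: https://api.example.cf.internal   # placeholder endpoint
  client_id: filebeat
  client_secret: changeme
  shard_id: filebeat-shard-a

# A second deployment would use an identical configuration except
# for a different value, for example: shard_id: filebeat-shard-b
```

Within a single deployment, the instance count can then be adjusted with `cf scale filebeat -i 2` or by changing the number of instances in the manifest.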