Create actions are used to create new records in the data layer. For example:
```elixir
# on a ticket resource
create :open do
  accept [:title]
  change set_attribute(:status, :open)
end
```
Here we have a create action called :open that allows setting the title, and sets the status to :open. It could be called like so:
```elixir
Ticket
|> Ash.Changeset.for_create(:open, %{title: "Need help!"})
|> Ash.create!()
```
For a full list of all of the available options for configuring create actions, see the `Ash.Resource.Dsl` documentation.
## Atomic set

You can use `atomic_set/3` to set attribute values using database expressions during create actions. This is useful for values that should be computed by the database rather than in application code, such as timestamps or UUIDs.
```elixir
create :create do
  accept [:title]
  change atomic_set(:created_at, expr(now()))
  change atomic_set(:uuid, expr(fragment("gen_random_uuid()")))
end
```
> ### Using `atomic_ref/1` in creates {: .info}
>
> You can use `atomic_ref/1` in create actions. It returns `nil` or the latest value that the attribute is being changed to. Note that `atomic_set` cannot reference related fields, use `exists` with relationships, or reference aggregates that involve relationships, since these require a persisted record.
For update actions, `atomic_set/3` behaves identically to `atomic_update/3`.
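As a sketch of that equivalence, an update action can use `atomic_update/3` the same way (the `:score` attribute here is illustrative, not from the example resource):

```elixir
update :increment_score do
  # Computed in the database, just like atomic_set/3 in a create
  change atomic_update(:score, expr(score + 1))
end
```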
See the Code Interface guide for creating an interface to call the action more elegantly, like so:
```elixir
Support.open_ticket!("Need help!")
```
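One way to make that call available is a domain-level code interface along these lines (a sketch, assuming a `Support` domain containing the `Ticket` resource):

```elixir
# In the Support domain module
resources do
  resource Ticket do
    define :open_ticket, action: :open, args: [:title]
  end
end
```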
## Bulk creates

Bulk creates take a list or stream of inputs for a given action, and batch calls to the underlying data layer.
Given our example above, you could call `Ash.bulk_create` like so:

```elixir
Ash.bulk_create([%{title: "Foo"}, %{title: "Bar"}], Ticket, :open)
```
> ### Check the docs! {: .warning}
>
> Make sure to thoroughly read and understand the documentation in `Ash.bulk_create/4` before using. Read each option and note the default values. By default, bulk creates don't return records or errors, and don't emit notifications.
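If you do opt in to returning records and errors, the result is an `Ash.BulkResult` struct. A sketch of what that might look like:

```elixir
%Ash.BulkResult{records: records, errors: errors} =
  Ash.bulk_create(
    [%{title: "Foo"}, %{title: "Bar"}],
    Ticket,
    :open,
    return_records?: true,
    return_errors?: true
  )
```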
Generally speaking, all regular Ash create actions are compatible (or can be made to be compatible) with bulk create actions. However, there are some important considerations.
`Ash.Resource.Change` modules can be optimized for bulk actions by implementing `batch_change/3`, `before_batch/3` and `after_batch/3`. If you implement `batch_change/3`, the `change` function will no longer be called, and you should move any behavior implemented with `before_action` and `after_action` hooks into the `before_batch` and `after_batch` callbacks.
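A minimal sketch of a batch-optimized change module (the slug-setting behavior is illustrative, not from the example resource):

```elixir
defmodule MyApp.Changes.SlugifyTitle do
  use Ash.Resource.Change

  # Called once per batch instead of change/3
  @impl true
  def batch_change(changesets, _opts, _context) do
    Enum.map(changesets, fn changeset ->
      title = Ash.Changeset.get_attribute(changeset, :title)
      Ash.Changeset.force_change_attribute(changeset, :slug, slugify(title))
    end)
  end

  # Runs after each batch, with the created records
  @impl true
  def after_batch(changesets_and_results, _opts, _context) do
    Enum.map(changesets_and_results, fn {_changeset, record} -> {:ok, record} end)
  end

  defp slugify(nil), do: nil
  defp slugify(title), do: title |> String.downcase() |> String.replace(~r/\s+/, "-")
end
```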
Actions that reference arguments in changes, i.e. `change set_attribute(:attr, ^arg(:arg))`, will prevent us from using the `batch_change/3` behavior. This is usually not a problem; a change like that is lightweight and would not benefit from being optimized with `batch_change/3`.
If your action uses `after_action` hooks, or has `after_batch/3` logic defined for any of its changes, then we must ask the data layer to return the records it inserted. Again, this is not generally a problem, since by default we throw away the results of each batch anyway. If you are using `return_records?: true`, then you are already requesting all of the results.
### Returning a stream

Returning a stream allows you to work with a bulk action as an Elixir `Stream`. For example:
```elixir
input_stream()
|> Ash.bulk_create(Resource, :action, return_stream?: true, return_records?: true)
|> Stream.map(fn
  {:ok, result} ->
    # process results
    result

  {:error, error} ->
    # process errors
    error
end)
|> Enum.reduce(%{}, fn
  {:ok, result}, acc ->
    # process results
    acc

  {:error, error}, acc ->
    # process errors
    acc
end)
```
> ### Be careful with streams {: .warning}
>
> Because streams are lazily evaluated, if you were to do something like this:
>
> ```elixir
> [input1, input2, ...] # has 300 things in it
> |> Ash.bulk_create(
>   Resource,
>   :action,
>   return_stream?: true,
>   return_records?: true,
>   batch_size: 100 # default is 100
> )
> |> Enum.take(150) # stream has 300, but we only take 150
> ```
>
> What would happen is that we would insert 200 records. The stream would end after we process the first two batches of 100. Be sure you aren't using things like `Stream.take` or `Enum.take` to limit the number of things pulled from the stream, unless you actually want to limit the number of records created.
## Upserts

Upserting is the process of "creating or updating" a record, modeled with a single simple create. Both bulk creates and regular creates support upserts. Upserts can be declared in the action, like so:
```elixir
create :create_user do
  accept [:email]
  upsert? true
  upsert_identity :unique_email
end
```
Or they can be done with options when calling the create action.
```elixir
Ash.create!(changeset, upsert?: true, upsert_identity: :unique_email)
```
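Since bulk creates support upserts as well, the same options can be passed to `Ash.bulk_create` (the `User` resource here is illustrative):

```elixir
Ash.bulk_create(
  [%{email: "a@example.com"}, %{email: "b@example.com"}],
  User,
  :create_user,
  upsert?: true,
  upsert_identity: :unique_email
)
```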
> ### Upserts do not use an update action {: .warning}
>
> While an upsert is conceptually a "create or update" operation, it does not result in an update action being called. The data layer contains the upsert implementation. This means that if you have things like global changes that are only run on update, they will not be run on upserts that result in an update. Additionally, notifications for updates will not be emitted from upserts. Most importantly, no read or update policies are applied! You must take care that an upsert can only target records that the user has permission to update.
Let's imagine that you want a user to upsert an article by its slug, but only if it is their article.

If your action looked like this:
```elixir
create :upsert_article_by_slug do
  upsert? true
  accept [:slug, :title, :body]
  upsert_identity :unique_slug
end
```
And one way it could be called is like so:
```elixir
Article
|> Ash.Changeset.for_create(
  :upsert_article_by_slug,
  %{slug: "foo", title: "new title", body: "new body"},
  actor: current_user
)
|> Ash.create!()
```
This would create an article, unless there is an article with a matching slug in which case it would update the title and the body to match the provided input. Let's add the "only if it is their article" functionality.
For this we use the `upsert_condition` option to further scope the upsert:
```elixir
create :upsert_article_by_slug do
  upsert? true
  accept [:slug, :title, :body]
  upsert_identity :unique_slug
  upsert_condition expr(user_id == ^actor(:id))
end
```
> ### What is `^actor(:id)`? {: .info}
>
> Many places in Ash that support expressions also support templates. These are ways to refer to certain things that are commonly available, like the actor or action argument values.
>
> For more information, see the expressions guide.
Now, when we perform this upsert, there are three possible outcomes:

1. No record exists with the given `slug`, in which case the article is created.
2. A record exists with the given `slug`, and its `user_id` matches the provided actor's id, so it is updated with the new title and body.
3. A record exists with the given `slug`, and its `user_id` does not match the provided actor's id, in which case the action results in an `Ash.Error.Changes.StaleRecord` error. This is the same error that would occur if the actor attempted to update something that had changed in some unexpected way in the database.

> ### Improving the stale record error {: .info}
>
> You may wish to transform this into an error message that can be displayed to the user, using the `d:actions.create.error_handler` option. For example:
>
> ```elixir
> create :upsert_article_by_slug do
>   upsert? true
>   accept [:slug, :title, :body]
>   upsert_identity :unique_slug
>   upsert_condition expr(user_id == ^actor(:id))
>
>   error_handler fn
>     _changeset, %Ash.Error.Changes.StaleRecord{} ->
>       Ash.Error.Changes.InvalidChanges.exception(
>         field: :slug,
>         message: "has already been taken"
>       )
>
>     _changeset, other ->
>       # leave other errors untouched
>       other
>   end
> end
> ```
Upserts support both `atomic_update/3` and `atomic_set/3`. The key difference is when each applies:

- `atomic_update/3` - only applies during the UPDATE phase (when the record already exists). Does not affect the INSERT.
- `atomic_set/3` - applies during the INSERT phase (when creating).

For upserts, if you want the value set in both cases, you can use both together. For example:
```elixir
create :create_game do
  accept [:identifier]
  upsert? true
  upsert_identity :identifier
  change set_attribute(:score, 0)
  change atomic_update(:score, expr(score + 1))
end
```
This will result in creating a game with a score of 0, and if the game already exists, it will increment the score by 1.
If you want database-generated values for both insert and update:
```elixir
create :upsert_with_timestamps do
  accept [:identifier]
  upsert? true
  upsert_identity :identifier

  # Set created_at only on insert
  change atomic_set(:created_at, expr(now()))

  # Update updated_at on both insert and update
  change set_attribute(:updated_at, &DateTime.utc_now/0)

  # Or use atomic_update for update-only behavior
  change atomic_update(:access_count, expr(access_count + 1))
end
```
For information on options configured in the action, see `d:Ash.Resource.Dsl.actions.create`.

For information on options when calling the action, see `Ash.create/2`.
## What happens when you run a create action

All actions are run in a transaction if the data layer supports it. You can opt out of this behavior by supplying `transaction?: false` when creating the action. When an action is being run in a transaction, all steps inside of it are serialized, because transactions cannot be split across processes.
- … the `destination_attribute` of the relationship.
- `before_transaction` and `around_transaction` hooks are called (`Ash.Changeset.before_transaction/2`). Keep in mind, any validations that are marked as `before_action? true` (or all global validations if your action has `delay_global_validations? true`) will not have happened at this point.
- `before_action` hooks are performed in order.
- `after_action` hooks are performed in order.
- `after_transaction` hooks are invoked with the result of the transaction (even if it was an error).
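As a sketch of where those hooks attach (the hook bodies here are placeholders, not from the source):

```elixir
create :open do
  accept [:title]

  change fn changeset, _context ->
    changeset
    |> Ash.Changeset.before_transaction(fn changeset ->
      # runs before the transaction is opened
      changeset
    end)
    |> Ash.Changeset.before_action(fn changeset ->
      # runs inside the transaction, before the data layer create
      changeset
    end)
    |> Ash.Changeset.after_action(fn _changeset, record ->
      # runs inside the transaction, with the created record
      {:ok, record}
    end)
    |> Ash.Changeset.after_transaction(fn _changeset, result ->
      # runs after the transaction completes, even on error
      result
    end)
  end
end
```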