Title: | Characterize relational contracts in repeated or stochastic games |
---|---|
Description: | Characterize relational contracts in repeated or stochastic games. Can also analyse repeated negotiation equilibria. |
Authors: | Sebastian Kranz |
Maintainer: | Sebastian Kranz <[email protected]> |
License: | GPL >= 2.0 |
Version: | 0.2.0 |
Built: | 2024-11-21 03:20:25 UTC |
Source: | https://github.com/skranz/RelationalContracts |
Use ggplotly to show an animation of the payoff sets of a capped RNE going from t=T to t=1
animate_capped_rne_history( g, x = g$sdf$x[1], hist = g$eq.history, colors = c("#377EB8", "#E41A1C", "#4DAF4A", "#984EA3", "#FF7F00", "#FFFF33", "#A65628", "#F781BF"), alpha = 0.4, add.state.label = TRUE, add.grid = FALSE, add.diag = FALSE, add.plot = NULL, eq.li = NULL )
Use ggplotly to show an animation of the payoff sets of a list of equilibria
animate_eq_li(g, eq.li, x = g$sdf$x[1], ...)
Helper function to find differences between two equilibria
compare_eq(eq1, eq2 = g[["eq"]], g, verbose = TRUE)
Take a look at the computed transitions for each state using separate data frames
diagnose_transitions(g)
Often it is useful to specify games such that players don't move simultaneously but a random player ap is chosen to be active in a given state.
eq_combine_xgroup( g, eq = g[["eq"]], ap.col = ifelse(has.col(eq, "ap"), "ap", NA) )
g |
the game object |
eq |
the equilibrium, by default the last solved eq of g. |
ap.col |
the name (as a character) of the column in x.df that indexes the active player. By default "ap". |
The active player in a state x is defined by the variable ap in x.df and the original state by xgroup.
This function aggregates equilibrium outcomes from x to xgroup. For the payoffs r1, r2, v1, v2 and U we take the mean over the payoffs given the two possible active players.
The columns move.adv1 and move.adv2 describe the difference in a player's negotiation payoff when that player is the active one who can make a move, compared to the other player being active.
Finally we create action labels by combining the actions chosen when a player is active.
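A minimal usage sketch (assumes a game g for which an equilibrium has been solved and whose x.df contains the active-player column ap):
eq.xg = eq_combine_xgroup(g)  # aggregates the last solved equilibrium from x to xgroup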
Draws an arrow from state x to state y if and only if, on the equilibrium path, there is a positive probability to transition directly from x to y.
eq_diagram( g, show.own.loop = FALSE, show.terminal.loop = FALSE, use.x = NULL, just.eq.chain = FALSE, x0 = g$sdf$x[1], hide.passive.edge = TRUE, label.fun = NULL, tooltip.fun = NULL, active.edge.color = "#000077", passive.edge.color = "#dddddd", add.passive.edge = TRUE, passive.edge.width = 1, return.dfs = FALSE, eq = g[["eq"]], font.size = 24, font = paste0(font.size, "px Arial black") )
g |
The solved game object |
show.own.loop |
Shall a loop from a state to itself be drawn if there is a positive probability to stay in the state? (Default=FALSE) |
show.terminal.loop |
Only relevant if show.own.loop = FALSE: shall a self-loop still be drawn for terminal states? |
use.x |
optionally a vector of state ids that shall only be shown. |
just.eq.chain |
If TRUE only show states that can be reached with positive probability on the equilibrium path when starting from state x0. |
x0 |
Only relevant if just.eq.chain = TRUE: the state from which the equilibrium path starts. By default the first state. |
label.fun |
An optional function that takes the equilibrium object and game and returns a character vector that contains a label for each state. |
tooltip.fun |
Similar to label.fun, but returns the tooltips shown when hovering over a state. |
return.dfs |
if TRUE, don't show the diagram but only return the relevant edge and node data frames from which the diagram can be built manually. |
font.size |
The font size |
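A usage sketch for a solved game g, using the arguments documented above:
# show only states reachable on the equilibrium path from the first state
eq_diagram(g, just.eq.chain = TRUE, x0 = g$sdf$x[1])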
Draws an arrow from state x to state y if and only if, on the equilibrium path, there is a positive probability to transition directly from x to y.
eq_diagram_xgroup( g, show.own.loop = FALSE, show.terminal.loop = FALSE, use.x = NULL, just.eq.chain = FALSE, x0 = g$sdf$x[1], hide.passive.edge = TRUE, add.passive.edge = TRUE, label.fun = NULL, tooltip.fun = NULL, active.edge.color = "#000077", passive.edge.color = "#dddddd", passive.edge.width = 1, return.dfs = FALSE, eq = g[["eq"]], ap.col = if (has.col(eq, "ap")) "ap" else NA, font.size = 24, font = paste0(font.size, "px Arial black") )
g |
The solved game object |
show.own.loop |
Shall a loop from a state to itself be drawn if there is a positive probability to stay in the state? (Default=FALSE) |
show.terminal.loop |
Only relevant if show.own.loop = FALSE: shall a self-loop still be drawn for terminal states? |
use.x |
optionally a vector of state ids that shall only be shown. |
just.eq.chain |
If TRUE only show states that can be reached with positive probability on the equilibrium path when starting from state x0. |
x0 |
Only relevant if just.eq.chain = TRUE: the state from which the equilibrium path starts. By default the first state. |
label.fun |
An optional function that takes the equilibrium object and game and returns a character vector that contains a label for each state. |
tooltip.fun |
Similar to label.fun, but returns the tooltips shown when hovering over a state. |
return.dfs |
if TRUE, don't show the diagram but only return the relevant edge and node data frames from which the diagram can be built manually. |
Get the last computed equilibrium of game g
get_eq(g, extra.cols = "ae", eq = g[["eq"]], add.vr = FALSE)
Returns, for all discount factors, the optimal simple strategy profiles, maximum joint payoffs and punishment profiles
get_repgames_results( g, action.details = TRUE, delta = g$param$delta, rho = g$param$rho )
Get the last computed RNE of game g
get_rne(g, extra.cols = "ae", eq = g[["rne"]])
Retrieve more details about the last computed RNE
get_rne_details(g, x = NULL)
Get the last computed SPE of game g
get_spe(g, extra.cols = "ae", eq = g[["spe"]])
Get the intermediate steps from t = T down to t = 1 for a T-RNE or capped RNE that has been solved with save.history = TRUE
get_T_rne_history(g)
To be used as argument of irv_joint_dist
irv(var, ..., default = NULL, lower = NULL, upper = NULL, vals.unique = TRUE)
See vignette for examples
irv_joint_dist( df, ..., enclos = parent.frame(), remove.zero.prob = TRUE, prob.var = "prob" )
To be used as argument of irv
irv_val(val, prob)
See vignette for examples
Show a base R plot of equilibrium payoff set
plot_eq_payoff_set( g, x = eq$x[1], t = 1, eq = if (use.vr) get_eq(g, add.vr = TRUE) else g[["eq"]], xlim = NULL, ylim = NULL, add = FALSE, plot.r = TRUE, alpha = 0.8, black.border = TRUE, add.state.label = is.null(labels), labels = NULL, colors = c("#377EB8", "#E41A1C", "#4DAF4A", "#984EA3", "#FF7F00", "#FFFF33", "#A65628", "#F781BF"), add.xlim = NULL, add.ylim = NULL, extend.lim.perc = 0.05, use.vr = FALSE, ... )
g |
The game object for which an equilibrium has been solved |
x |
A character vector of the state(s) for which the (continuation) equilibrium payoff set shall be shown. By default only the first state. |
eq |
An equilibrium object. By default the last solved equilibrium. |
xlim |
as in base plot() |
ylim |
as in base plot() |
add |
as in base plot() |
plot.r |
Shall the negotiation payoffs be shown as a point on the Pareto frontier? (default = TRUE) |
alpha |
opacity of the fill color |
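A usage sketch (assumes an equilibrium, e.g. an SPE, has been solved for g):
plot_eq_payoff_set(g, x = g$sdf$x[1])  # payoff set of the first state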
Fix action profiles for the equilibrium path (ae) and during punishment (a1.hat and a2.hat) that are assumed to be played after the cap in period T onwards. The punishment profile a1.hat is the profile in which player 1 already plays a best-reply (in a1 he might play a non-best reply). From the specified action profiles in all states, we can compute the relevant after-cap payoffs U(x), v1(x) and v2(x) assuming that state transitions would continue.
rel_after_cap_actions(g, x = NA, ae, a1.hat, a2.hat, x.T = NA)
g |
a relational contracting game created with rel_game |
x |
The state(s) for which this after-cap payoff set is applied. If NA (default) and also x.T is NA, it applies to all states. |
ae |
A named list that specifies the equilibrium action profiles. |
a1.hat |
A named list that specifies the action profile when player 1 is punished. |
a2.hat |
A named list that specifies the action profile when player 2 is punished. |
x.T |
Instead of specifying the argument x, we can specify as x.T the name of an after-cap state. This name can then be referred to via the argument x.T in rel_state and rel_states. |
Returns the updated game
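A sketch with purely hypothetical action names e1 and e2; the named lists must match the action variables defined in the game's states:
g = rel_after_cap_actions(g,
  ae     = list(e1 = 1, e2 = 1),  # profile assumed after the cap on the equilibrium path
  a1.hat = list(e1 = 0, e2 = 1),  # punishment profile against player 1
  a2.hat = list(e1 = 1, e2 = 0)   # punishment profile against player 2
)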
Specify the SPE payoff set(s) of the truncated game(s) after a cap in period T. While we could specify a complete repeated game that is played after the cap, it also suffices to specify just an SPE payoff set of the truncated game of the after-cap state.
rel_after_cap_payoffs( g, x = NA, U, v1 = NA, v2 = NA, v1.rep = NA, v2.rep = NA, x.T = NA )
g |
a relational contracting game created with rel_game |
x |
The state(s) for which this after-cap payoff set is applied. If NA (default) and also x.T is NA, it applies to all states. |
U |
The highest joint payoff in the truncated repeated game starting from period T. |
v1 |
The lowest SPE payoff of player 1 in the truncated game. These are average discounted payoffs using delta as discount factor. |
v2 |
Like v1, but for player 2. |
v1.rep |
Alternative to v1: player 1's lowest SPE payoff in the repeated game with adjusted discount factor delta*(1-rho). It will be automatically converted into v1_trunc based on rho, delta and the bargaining weight, and is often easier to specify. |
v2.rep |
Like v1.rep, but for player 2. |
x.T |
Instead of specifying the argument x, we can specify as x.T the name of an after-cap state. This name can then be referred to via the argument x.T in rel_state and rel_states. |
Returns the updated game
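A sketch with hypothetical numbers; "terminal" is an illustrative after-cap state name:
g = rel_after_cap_payoffs(g, x.T = "terminal",
  U = 2,                  # highest joint payoff of the truncated game
  v1.rep = 0, v2.rep = 0  # repeated-game punishment payoffs, converted automatically
)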
In a capped version of the game we assume that after period T the state cannot change anymore and always stays the same, i.e. after T periods players play a repeated game. For a given T, a capped game has a unique RNE payoff. Also see rel_T_rne.
rel_capped_rne( g, T, delta = g$param$delta, rho = g$param$rho, adjusted.delta = NULL, beta1 = g$param$beta1, tie.breaking = c("equal_r", "slack", "random", "first", "last", "max_r1", "max_r2", "unequal_r")[1], tol = 1e-12, add.iterations = FALSE, save.details = FALSE, save.history = FALSE, use.cpp = TRUE, T.rne = FALSE, spe = NULL, res.field = "eq" )
g |
The game |
T |
The number of periods in which new negotiations can take place. |
delta |
the discount factor |
rho |
the negotiation probability |
adjusted.delta |
the adjusted discount factor (1-rho)*delta. Can be specified instead of delta. |
beta1 |
the bargaining weight of player 1. By default equal to 0.5. Can also be specified initially with rel_param. |
tie.breaking |
A tie-breaking rule applied when multiple action profiles could be implemented on the equilibrium path with the same joint payoff U. The possible values are listed in the usage above; the default is "equal_r". |
tol |
Due to numerical inaccuracies, the calculated incentive constraints of some action profiles may appear violated even though they hold under exact computation, yielding unexpected results. We therefore also allow action profiles whose numerical incentive constraint is violated by no more than tol. By default tol = 1e-12. |
add.iterations |
if TRUE just add T iterations to the previously computed capped RNE or T-RNE. |
save.details |
if TRUE, details of the equilibrium are saved and can be analysed later by calling get_rne_details. |
save.history |
if TRUE, the equilibrium values for intermediate periods t are saved and can be retrieved with get_T_rne_history. |
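A usage sketch for a compiled game g (the cap T = 50 is arbitrary):
g = rel_capped_rne(g, T = 50, save.history = TRUE)
get_eq(g)                    # the capped RNE
hist = get_T_rne_history(g)  # intermediate values from t = T down to t = 1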
Change parameters of a relational contracting game
rel_change_param(g, ...)
g |
a relational contracting game created with rel_game |
... |
other parameters that can e.g. be used in payoff functions |
delta |
The discount factor |
rho |
The negotiation probability |
Returns the updated game
Compiles a relational contracting game
rel_compile(g, ..., compute.just.static = FALSE)
By default equilibrium payoffs are given as average discounted payoffs. This is the discounted sum of payoffs multiplied by (1-delta).
rel_eq_as_discounted_sums(g)
Call this function after you have solved an equilibrium if you want to present the equilibrium payoffs as the discounted sum of payoffs instead.
g |
the game for which an equilibrium was computed |
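A usage sketch; by the relation above, a discounted sum is the average discounted payoff divided by (1-delta):
g = rel_spe(g)
g = rel_eq_as_discounted_sums(g)
get_eq(g)  # payoffs are now reported as discounted sums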
We compute the "equilibrium" play that would maximize joint payoffs if incentive constraints could be completely ignored.
rel_first_best(g, delta = g$param$delta, ...)
g |
the game object |
delta |
The discount factor |
... |
additional parameters |
Note that we create the same columns as for an SPE, e.g. punishment payoffs v1 and v2 that would arise if every action profile could be implemented as punishment. This allows using the same functions, such as eq_diagram, as for equilibria.
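A usage sketch; because the same columns as for an SPE are created, the result can be inspected with the usual helpers:
g = rel_first_best(g)
eq_diagram(g)  # diagram of the first-best play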
Creates a new relational contracting game
rel_game(name = "Game", ..., enclos = parent.frame())
We simply solve the truncated game given r1 and r2 and check whether the resulting r1 and r2 are the same.
rel_is_eq_rne( g, eq = g[["eq"]], r1 = eq$r1, r2 = eq$r2, r.tol = 1e-10, verbose = TRUE )
Returns a game object that contains the MPE. Use the function get_mpe to retrieve a data frame that describes the MPE.
rel_mpe( g, delta = g$param$delta, static.eq = NULL, max.iter = 100, tol = 1e-08, a.init.guess = NULL )
g |
the game |
delta |
the discount factor |
max.iter |
maximum number of iterations |
tol |
we finish if payoffs in a subsequent iteration don't change by more than tol |
a.init.guess |
optionally an initial guess of the action profiles. A vector of size nx (number of states) that gives for each state the integer index of the action profile. For a game g, look at g$ax.grid to find the indices of the desired action profiles. |
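A usage sketch (get_mpe is the retrieval function mentioned above):
g = rel_mpe(g, delta = 0.9)
mpe.df = get_mpe(g)  # data frame describing the Markov perfect equilibrium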
Set some game options
rel_options(g, lab.action.sep = " ", lab.player.sep = " | ")
Add parameters to a relational contracting game
rel_param( g, ..., delta = non.null(param[["delta"]], 0.9), rho = non.null(param[["rho"]], 0), beta1 = non.null(param[["beta1"]], 1/2), param = g[["param"]] )
g |
a relational contracting game created with rel_game |
... |
other parameters that can e.g. be used in payoff functions |
delta |
The discount factor |
rho |
The negotiation probability |
Returns the updated game
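A minimal sketch of the typical first steps when building a game (the game name is illustrative):
g = rel_game("My Game")
g = rel_param(g, delta = 0.9, rho = 0.1, beta1 = 0.5)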
If the game is strongly directional, i.e. non-terminal states will be reached at most once, there exists a unique RNE payoff.
rel_rne( g, delta = g$param$delta, rho = g$param$rho, adjusted.delta = NULL, beta1 = g$param$beta1, verbose = TRUE, ... )
g |
The game object |
delta |
the discount factor |
rho |
the negotiation probability |
adjusted.delta |
the adjusted discount factor (1-rho)*delta. Can be specified instead of delta. |
beta1 |
the bargaining weight of player 1. By default equal to 0.5. Can also be specified initially with rel_param. |
verbose |
if TRUE, print additional information while solving. |
For weakly directional games there may exist no RNE payoff or multiple RNE payoffs.
You can call rel_capped_rne to solve a capped version of the game that allows state changes only up to some period T. Such a capped version always has a unique RNE payoff.
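A usage sketch for a compiled, strongly directional game g:
g = rel_rne(g, delta = 0.9, rho = 0.1)
get_rne(g)  # retrieve the computed RNE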
Scale equilibrium payoffs
rel_scale_eq_payoffs(g, factor)
g |
the game for which an equilibrium was computed |
factor |
the factor by which the payoffs U,v1,v2,r1 and r2 are multiplied |
Solves, for each specified state, the repeated game that arises when the state is fixed forever
rel_solve_repgames( g, x = g$sdf$x, overwrite = FALSE, rows = match(x, g$sdf$x), use.repgame.package = FALSE )
Returns a game object that contains a field 'rep.games.df'. This data frame contains the relevant information to compute equilibrium payoffs and equilibria for all discount factors for all states.
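A usage sketch combining the two repeated-game helpers documented above:
g = rel_solve_repgames(g)
res = get_repgames_results(g)  # optimal simple profiles for all discount factors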
Finds an optimal simple subgame perfect equilibrium of g. From this the whole SPE payoff set can be deduced.
rel_spe( g, delta = g$param$delta, tol.feasible = 1e-10, no.exist.action = c("warn", "stop", "nothing"), verbose = FALSE, r1 = NULL, r2 = NULL, rho = g$param$rho, add.action.labels = TRUE, max.iter = 10000, first.best = FALSE )
g |
the game object |
delta |
The discount factor. By default the discount factor specified in rel_param. |
tol.feasible |
Due to numerical inaccuracies, incentive constraints that theoretically should hold exactly sometimes appear violated. To avoid this problem, we consider all action profiles feasible whose incentive constraint is violated by no more than tol.feasible. |
no.exist.action |
What shall be done if no pure-strategy SPE exists? Default is "warn". |
verbose |
if TRUE, print additional information while solving. |
r1 |
Optionally, together with r2, exogenously specified negotiation payoffs of the players. |
rho |
Only relevant if r1 and r2 are not null. In that case the negotiation probability. |
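A usage sketch for a compiled game g:
g = rel_spe(g, delta = 0.9)
get_spe(g)  # data frame describing the optimal simple SPE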
Adds a column state.prob to the computed equilibrium data frame, which you can retrieve by calling get_eq.
rel_state_probs( g, x0 = c("equal", "first", "first.group")[1], start.prob = NULL, n.burn.in = 100, n.averaging = 100, tol = 1e-13, eq.field = "eq" )
g |
the game object for which an equilibrium has been solved |
x0 |
determines the initial state distribution: "equal" (the default per the usage above) starts in each state with equal probability, "first" starts in the first defined state, and "first.group" spreads the start probability equally over the states of the first group. |
start.prob |
an optional vector of probabilities that specifies for each state the probability that the game starts in that state. Overwrites "x0" unless kept NULL. |
n.burn.in |
Number of rounds before probabilities are averaged. |
n.averaging |
Number of rounds for which probabilities are averaged. |
tol |
Tolerance: the computation already stops during the burn-in phase if transition probabilities change by no more than tol between rounds. |
If the equilibrium strategy induces a unique stationary distribution over the states, this distribution should typically be found (or at least approximated). Otherwise the result can depend on the parameters.
The initial distribution of states is determined by the parameters x0 or start.prob. We then multiply the current probabilities n.burn.in times with the transition matrix on the equilibrium path. This yields the probability distribution over states assuming the game is played for n.burn.in periods. We then continue the process for n.averaging rounds and return the mean of the state probability vectors over these rounds. If between two rounds in the burn-in phase the transition probabilities of no state pair change by more than the parameter tol, we stop immediately and use the resulting probability vector.
Note that for T-RNE or capped RNE we always take the transition probabilities of the first period, i.e. we don't increase the t in the actual state definition.
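A usage sketch (the parameter values are illustrative):
g = rel_state_probs(g, x0 = "first", n.burn.in = 500, n.averaging = 500)
get_eq(g)$state.prob  # state probabilities on the equilibrium path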
Add one or multiple states. Action spaces, payoffs and state transitions can also be specified via functions
rel_states( g, x, A1 = NULL, A2 = NULL, pi1, pi2, A.fun = NULL, pi.fun = NULL, trans.fun = NULL, static.A1 = NULL, static.A2 = NULL, static.A.fun = NULL, static.pi1, static.pi2, static.pi.fun = NULL, x.T = NULL, pi1.form, pi2.form, ... ) rel_state( g, x, A1 = NULL, A2 = NULL, pi1, pi2, A.fun = NULL, pi.fun = NULL, trans.fun = NULL, static.A1 = NULL, static.A2 = NULL, static.A.fun = NULL, static.pi1, static.pi2, static.pi.fun = NULL, x.T = NULL, pi1.form, pi2.form, ... )
g |
a relational contracting game created with rel_game |
x |
The names of the states |
A1 |
The action set of player 1. A named list of action variables and their possible values, e.g. A1 = list(e = seq(0, 1, by = 0.1)). |
A2 |
The action set of player 2. See A1. |
pi1 |
Player 1's payoff. (Non standard evaluation) |
pi2 |
Player 2's payoff. (Non standard evaluation) |
A.fun |
Alternative to specify A1 and A2, a function that returns action sets. |
pi.fun |
Alternative to specify pi1 and pi2 as formula. A vectorized function that returns payoffs directly for all combinations of states and action profiles. |
trans.fun |
A function that specifies state transitions |
x.T |
Relevant when solving a capped game: which terminal state shall be assumed from period T onwards. By default, we stay in state x. |
pi1.form |
Player 1's payoff as formula with standard evaluation |
pi2.form |
Player 2's payoff as formula with standard evaluation |
Returns the updated game
rel_state: rel_state is just a synonym for rel_states. You may want to use it if you specify just a single state.
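A minimal end-to-end sketch of a one-state game; the game name, action variables and payoff formulas are purely illustrative:
g = rel_game("Simple Effort Game")
g = rel_param(g, delta = 0.9, rho = 0)
g = rel_states(g, x = "x0",
  A1 = list(e = seq(0, 1, by = 0.25)),  # player 1 chooses an effort level e
  A2 = list(p = c(0, 1)),               # player 2 chooses a payment p
  pi1 = p - 0.5 * e^2,                  # payoffs use non-standard evaluation
  pi2 = e - p
)
g = rel_compile(g)
g = rel_spe(g)
get_eq(g)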
The idea of a T-RNE is that relational contracts will be newly negotiated only during a finite number of T periods. After T periods no new negotiations take place, i.e. every SPE continuation payoff can be implemented. For fixed T there is a unique RNE payoff.
rel_T_rne( g, T, delta = g$param$delta, rho = g$param$rho, adjusted.delta = NULL, beta1 = g$param$beta1, tie.breaking = c("equal_r", "slack", "random", "first", "last", "max_r1", "max_r2", "unequal_r")[1], tol = 1e-12, save.details = FALSE, add.iterations = FALSE, save.history = FALSE, use.cpp = TRUE, spe = g[["spe"]], res.field = "eq" )
g |
The game |
T |
The number of periods in which new negotiations can take place. |
delta |
the discount factor |
rho |
the negotiation probability |
adjusted.delta |
the adjusted discount factor (1-rho)*delta. Can be specified instead of delta. |
beta1 |
the bargaining weight of player 1. By default equal to 0.5. Can also be specified initially with rel_param. |
tie.breaking |
A tie-breaking rule applied when multiple action profiles could be implemented on the equilibrium path with the same joint payoff U. The possible values are listed in the usage above; the default is "equal_r". |
tol |
Due to numerical inaccuracies, the calculated incentive constraints of some action profiles may appear violated even though they hold under exact computation, yielding unexpected results. We therefore also allow action profiles whose numerical incentive constraint is violated by no more than tol. By default tol = 1e-12. |
save.details |
if TRUE, details of the equilibrium are saved and can be analysed later by calling get_rne_details. |
add.iterations |
if TRUE just add T iterations to the previously computed capped RNE or T-RNE. |
save.history |
if TRUE, the equilibrium values for intermediate periods t are saved and can be retrieved with get_T_rne_history. |
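A usage sketch (T = 50 is arbitrary):
g = rel_T_rne(g, T = 50, save.history = TRUE)
get_eq(g)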
Add a state transition from one state to one or several states. For more complex games, it may be preferable to use the argument trans.fun of rel_states instead.
rel_transition(g, xs, xd, ..., prob = 1)
g |
a relational contracting game created with rel_game |
xs |
Name(s) of source states |
xd |
Name(s) of destination states |
... |
named actions and their values |
prob |
transition probability |
Returns the updated game
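A sketch with illustrative state and action names: if the (hypothetical) action e equals 1 in state "x0", the game moves to state "x1" with probability 0.5:
g = rel_transition(g, xs = "x0", xd = "x1", e = 1, prob = 0.5)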