
I'm conducting a simulation analysis of how different system characteristics of a casualty evacuation aircraft (speed, capacity, range, number of aircraft) affect system performance (delays, timing through different levels of care, etc.). The purpose of the study is not to determine the optimal or near-optimal set of characteristics, but to determine the marginal benefit or penalty of each aircraft system characteristic to the patient, compared to the current system. That way, if and when the system is funded, designed, and implemented, constraints such as cost, weight, and volume can be evaluated within the context of patient outcomes, and the best *affordable* and *practical* design can be chosen based on its benefit to patient outcomes.

My question is whether anyone has recommendations for an experimental design to handle the following factor/levels:

A: 2 levels

B: 2 levels

C: 4 levels

D: 5 levels

E: 5 levels

All of these factors are significant and are the result of a sensitivity analysis that started with 12 factors. Factors C through E have so many levels because I'm not interested in just the extreme values; I'm interested in the response to each marginal change in factor level against a baseline. From previous experience with similar output, the response is highly nonlinear in factors C through E, two-factor interactions among C through E are significant, and in places the response surface is not smooth or well behaved. The experiment needs to be repeated 7 times under different sets of assumptions. Examining every combination in all 7 cases requires 2,800 runs, each replicated 40 times, which translates to about 11 days of continuous computing on a single machine. I have a lab at my disposal and can run several machines, and I have the time to do it.
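As a sanity check on those run counts, the full factorial is easy to enumerate directly. A minimal Python sketch (the factor names and integer level codes are placeholders, not the real design values):

```python
# Enumerate the full 2 x 2 x 4 x 5 x 5 factorial and confirm the run counts.
from itertools import product

levels = {"A": 2, "B": 2, "C": 4, "D": 5, "E": 5}
design = list(product(*(range(n) for n in levels.values())))

per_scenario = len(design)       # 2*2*4*5*5 = 400 design points
total_runs = per_scenario * 7    # 7 assumption sets -> 2800 runs
total_sims = total_runs * 40     # 40 replicates each -> 112000 simulations
print(per_scenario, total_runs, total_sims)  # 400 2800 112000
```

So the 2,800 figure is 400 design points per assumption set times 7 sets, and the 40 replicates bring the total to 112,000 individual simulation runs.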

BUT. Do I need to? Given the nonlinear, poorly behaved nature of the response surface, does anyone have any recommendations as to how I can sample the decision space to adequately estimate the difference between any hypothetical system within the bounds of the experiment and the current capability (which is also simulated in a separate experiment)?

Thanks for any insights,

Jon
