Meetings

Fridays 10:00 am at:

01.02.23

  • We discussed different methods and algorithms that can be used to simulate structures.
  • Leo and Mauricio presented some preliminary scripting results on the uniaxial compression example.
  • Bruno shared a reference expressing skepticism about the ability of ML models to generalize well.
  • Bruno shared a reference on structural optimization using an ML model instead of FEA.
  • Bruno shared a reference on structural parameter identification using an ML model instead of FEA.
  • We decided to generate 1000 samples varying $E$, $L_x$ and $p$, and to predict the value of $u_x$ at $x$ = [ $L_x$ , $0$ , $0$ ].
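
A minimal sketch of this sampling step, assuming the analytic uniaxial-compression solution $u_x$ = $-p L_x / E$ at $x$ = [ $L_x$ , $0$ , $0$ ] (the sign convention and all sampling ranges below are illustrative assumptions, not agreed values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Sample the three features uniformly; the ranges are placeholders.
E  = rng.uniform(1.0, 10.0, n)   # Young's modulus
Lx = rng.uniform(0.5, 2.0, n)    # bar length
p  = rng.uniform(0.1, 1.0, n)    # applied compressive pressure

# Assumed analytic displacement at x = [Lx, 0, 0]: u_x = -p * Lx / E.
u_x = -p * Lx / E
print(u_x.shape)  # → (1000,)
```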

08.02.23

  • Data visualization does not seem useful, since we have an analytic solution for the displacements.
  • Mauricio shared this reference about surrogate models to predict breast displacement fields.
  • Leo showed scripts to generate .csv results. Features: [ $L_x$ , $E$ , $p$ ]; targets: [ $u_x$ , $u_y$ , $u_z$ ].
  • Bruno contributed to the analytic solution of the uniaxial compression example.
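
The .csv generation could be sketched like this; the Poisson ratio $\nu$, the unit cross-section and all sampling ranges are assumptions for illustration, not the project's actual values:

```python
import csv
import numpy as np

rng = np.random.default_rng(1)
n = 100
nu = 0.3         # Poisson ratio (assumed fixed)
Ly = Lz = 1.0    # cross-section dimensions (assumed)

Lx = rng.uniform(0.5, 2.0, n)
E  = rng.uniform(1.0, 10.0, n)
p  = rng.uniform(0.1, 1.0, n)

# Assumed analytic displacements for uniaxial compression:
u_x = -p * Lx / E        # axial shortening
u_y = nu * p * Ly / E    # lateral expansion (Poisson effect)
u_z = nu * p * Lz / E

with open("samples.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Lx", "E", "p", "ux", "uy", "uz"])
    writer.writerows(zip(Lx, E, p, u_x, u_y, u_z))
```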

08.03.23

  • We discussed possible reasons for the difference between validation and training losses, and opened issue #33 to investigate this behavior.
  • We set the definition for the new two-materials example with ONSAS (#31).
  • Bruno and Mauricio discussed experimental results obtained with a foam cube by Santiago from IIMPI as a possible example to model in ONSAS.
  • Bruno agreed on the MLP model implementation.
  • Bruno shared this book.
  • Leo shared the XGBoost tool, which provides a scikit-learn-compatible API, and we decided to try it on the first example.
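
A sketch of the planned boosting experiment. XGBoost is a separate library whose `XGBRegressor` follows the scikit-learn `fit`/`predict` interface; the snippet below uses sklearn's own `GradientBoostingRegressor` as a stand-in, on synthetic data from the assumed analytic solution $u_x$ = $-p L_x / E$ (ranges and seeds are illustrative):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 1000
X = np.column_stack([
    rng.uniform(0.5, 2.0, n),   # Lx
    rng.uniform(1.0, 10.0, n),  # E
    rng.uniform(0.1, 1.0, n),   # p
])
y = -X[:, 2] * X[:, 0] / X[:, 1]  # assumed target u_x = -p * Lx / E

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2,
                                          random_state=0)
model = GradientBoostingRegressor(random_state=0)
model.fit(X_tr, y_tr)
print(model.score(X_va, y_va))  # R^2 on the validation split
```

Swapping in `xgboost.XGBRegressor` requires only changing the estimator line.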

17.03.23

  • We discussed the reasons why the analytic-test performance is lower than on train/validation. We concluded that inputs at the border of the training range have an important impact on the analytic-test error. Finally, Leo ran the model over an acceptable range.

  • We set the final goals for the project, excluding XGBoost for the moment. If we get MLP results and add sufficiently detailed documentation of the obtained results, then we can try XGBoost.

  • The next example will use 6000 random samples, avoiding repetition, split 80/20 into train and validation.
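
The agreed sampling and split could be sketched as follows (feature ranges are illustrative; with continuous uniform draws exact repeats are practically impossible, but duplicates are dropped anyway to honor the no-repetition rule):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6000

samples = np.column_stack([
    rng.uniform(0.5, 2.0, n),   # Lx
    rng.uniform(1.0, 10.0, n),  # E
    rng.uniform(0.1, 1.0, n),   # p
])
# Drop exact duplicate rows, if any.
samples = np.unique(samples, axis=0)

# Shuffle, then split 80/20 into train and validation.
idx = rng.permutation(len(samples))
cut = int(0.8 * len(samples))
train, val = samples[idx[:cut]], samples[idx[cut:]]
print(len(train), len(val))  # → 4800 1200
```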

24.03.23

  • We discussed the final takeaways of Example 1.
  • First impressions of the second example:
    • We defined a baseline loss against which to analyze the MSE loss.
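
One common choice of baseline, assumed here, is the constant predictor that always outputs the training-set mean; its validation MSE is the number any trained model's MSE has to beat (targets below are synthetic, from the assumed analytic solution):

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic targets u_x = -p * Lx / E (assumed analytic solution).
Lx = rng.uniform(0.5, 2.0, 1000)
E  = rng.uniform(1.0, 10.0, 1000)
p  = rng.uniform(0.1, 1.0, 1000)
y  = -p * Lx / E

y_train, y_val = y[:800], y[800:]

# Baseline: always predict the training-set mean.  A model whose
# validation MSE is not clearly below this value adds no information.
baseline_mse = np.mean((y_val - y_train.mean()) ** 2)
print(baseline_mse)
```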