1,802 posts in 'All Posts'

  1. 2019.06.05 Starting August 1, 2019, apps published on Google Play must support 64-bit architectures
  2. 2019.06.05 [ml-agent] Imitation Learning
  3. 2019.06.05 [ml-agent] Built-in reinforcement learning
  4. 2019.06.04 Training-Imitation-Learning.
  5. 2019.06.04 Movement/Rotation
  6. 2019.06.04 IL2CPP

Starting August 1, 2019, apps published on Google Play must support 64-bit architectures

Unity3D 2019. 6. 5. 11:38

A 64-bit build is required.
https://developer.android.com/distribute/best-practices/develop/64-bit
Starting August 1, 2019, apps published on Google Play must support 64-bit architectures. 64-bit CPUs deliver faster, richer experiences for your users. Adding a 64-bit version of your app improves performance, opens the door to future innovation, and prepares you for devices with 64-bit-only hardware.

In the Unity Editor, under Player Settings > Target Architectures, check ARM64 in addition to ARMv7 (enabling ARM64 requires the IL2CPP scripting backend).
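The same change can also be applied from an editor script. Below is a minimal sketch of that idea (my own example, not from the original post; it assumes Unity 2018.1 or newer, where PlayerSettings.Android.targetArchitectures and the AndroidArchitecture flags are available):

// Editor/Enable64BitAndroid.cs (hypothetical file name; must live in an Editor folder)
using UnityEditor;

public static class Enable64BitAndroid
{
    [MenuItem("Tools/Enable 64-bit Android Build")]
    public static void Enable()
    {
        // ARM64 builds require the IL2CPP scripting backend (Mono cannot produce ARM64 binaries).
        PlayerSettings.SetScriptingBackend(BuildTargetGroup.Android, ScriptingImplementation.IL2CPP);

        // Keep ARMv7 for older devices and add ARM64 to meet the Google Play 64-bit requirement.
        PlayerSettings.Android.targetArchitectures = AndroidArchitecture.ARMv7 | AndroidArchitecture.ARM64;
    }
}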

 

 

Reference: https://blogs.unity3d.com/kr/2019/03/05/android-support-update-64-bit-and-app-bundles-backported-to-2017-4-lts/

 



[ml-agent] Imitation Learning

Unity3D/ml-agent 2019. 6. 5. 09:33

https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-Imitation-Learning.md

 


 


[ml-agent] Built-in reinforcement learning

Unity3D/ml-agent 2019. 6. 5. 09:30

I built for the Android platform, installed the build on a device, and tried running reinforcement learning.

 

From several sources I learned that this is not currently possible.

 

So I built for the PC, Mac & Linux Standalone platform instead.

 

An executable file was generated.

 

The file path is as follows.

C:\Users\smilejsu\Desktop\test-mlagent.exe

 

Using the Anaconda Prompt, I attempted training with the following commands.

(base) C:\Users\smilejsu>activate ml-agents

(ml-agents) C:\Users\smilejsu>d:

(ml-agents) D:\>cd D:\workspace\unity\Test\UnitySDK

(ml-agents) D:\workspace\unity\Test\UnitySDK>mlagents-learn trainer_config.yaml --env=C:\Users\smilejsu\Desktop\test-mlagent.exe --run-id=train --train


(Unity ML-Agents ASCII banner)


INFO:mlagents.trainers:{'--base-port': '5005',
 '--curriculum': 'None',
 '--debug': False,
 '--docker-target-name': 'None',
 '--env': 'C:\\Users\\smilejsu\\Desktop\\test-mlagent.exe',
 '--help': False,
 '--keep-checkpoints': '5',
 '--lesson': '0',
 '--load': False,
 '--no-graphics': False,
 '--num-envs': '1',
 '--num-runs': '1',
 '--run-id': 'train',
 '--save-freq': '50000',
 '--seed': '-1',
 '--slow': False,
 '--train': True,
 '<trainer-config-path>': 'trainer_config.yaml'}
c:\users\smilejsu\appdata\local\conda\conda\envs\ml-agents\lib\site-packages\mlagents\trainers\learn.py:141: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
  trainer_config = yaml.load(data_file)
INFO:mlagents.envs:
'TestAcademy' started successfully!
Unity Academy name: TestAcademy
        Number of Brains: 1
        Number of Training Brains : 1
        Reset Parameters :

Unity brain name: RollerBallBrain
        Number of Visual Observations (per agent): 0
        Vector Observation space size (per agent): 6
        Number of stacked Vector Observation: 1
        Vector Action space type: continuous
        Vector Action space size (per agent): [2]
        Vector Action descriptions: ,
2019-06-05 09:23:09.796920: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:mlagents.envs:Hyperparameters for the PPO Trainer of brain RollerBallBrain:
        batch_size:     10
        beta:   0.005
        buffer_size:    100
        epsilon:        0.2
        gamma:  0.99
        hidden_units:   128
        lambd:  0.95
        learning_rate:  0.0003
        max_steps:      5.0e4
        normalize:      False
        num_epoch:      3
        num_layers:     2
        time_horizon:   64
        sequence_length:        64
        summary_freq:   1000
        use_recurrent:  False
        summary_path:   ./summaries/train-0_RollerBallBrain
        memory_size:    256
        use_curiosity:  False
        curiosity_strength:     0.01
        curiosity_enc_size:     128
        model_path:     ./models/train-0/RollerBallBrain
c:\users\smilejsu\appdata\local\conda\conda\envs\ml-agents\lib\site-packages\numpy\core\fromnumeric.py:2957: RuntimeWarning: Mean of empty slice.
  out=out, **kwargs)
c:\users\smilejsu\appdata\local\conda\conda\envs\ml-agents\lib\site-packages\numpy\core\_methods.py:80: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
INFO:mlagents.trainers: train-0: RollerBallBrain: Step: 1000. Time Elapsed: 8.189 s Mean Reward: -0.976. Std of Reward: 0.623. Training.
INFO:mlagents.trainers: train-0: RollerBallBrain: Step: 2000. Time Elapsed: 15.860 s Mean Reward: -1.394. Std of Reward: 1.200. Training.
INFO:mlagents.trainers: train-0: RollerBallBrain: Step: 3000. Time Elapsed: 23.409 s Mean Reward: -0.484. Std of Reward: 1.027. Training.
INFO:mlagents.trainers: train-0: RollerBallBrain: Step: 4000. Time Elapsed: 30.629 s Mean Reward: -0.551. Std of Reward: 0.405. Training.
INFO:mlagents.trainers: train-0: RollerBallBrain: Step: 5000. Time Elapsed: 37.751 s Mean Reward: -0.378. Std of Reward: 0.480. Training.

 

The executable (test-mlagent.exe) launched automatically, as shown above, and training proceeded without problems.

 

 

References:

https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Learning-Environment-Executable.md

 


https://github.com/Unity-Technologies/ml-agents/issues/2099

 

How do I train brain after building mobile with ml-agent? · Issue #2099 · Unity-Technologies/ml-agents

When training finished, a .nn file was also generated, and it was possible to load the trained model into the Brain dynamically and run it.

INFO:mlagents.envs:Saved Model
INFO:mlagents.trainers:List of nodes to export for brain :RollerBallBrain
INFO:mlagents.trainers: is_continuous_control
INFO:mlagents.trainers: version_number
INFO:mlagents.trainers: memory_size
INFO:mlagents.trainers: action_output_shape
INFO:mlagents.trainers: action
INFO:mlagents.trainers: action_probs
INFO:mlagents.trainers: value_estimate
INFO:tensorflow:Restoring parameters from ./models/train-0/RollerBallBrain\model-15901.cptk
INFO:tensorflow:Froze 17 variables.
Converted 17 variables to const ops.
Converting ./models/train-0/RollerBallBrain/frozen_graph_def.pb to ./models/train-0/RollerBallBrain.nn
IGNORED: StopGradient unknown layer
GLOBALS: 'is_continuous_control', 'version_number', 'memory_size', 'action_output_shape'
IN: 'vector_observation': [-1, 1, 1, 6] => 'main_graph_0/hidden_0/BiasAdd'
IN: 'vector_observation': [-1, 1, 1, 6] => 'main_graph_1/hidden_0/BiasAdd'
IN: 'epsilon': [-1, 1, 1, 2] => 'mul'
OUT: 'action', 'action_probs', 'value_estimate'
DONE: wrote ./models/train-0/RollerBallBrain.nn file.
INFO:mlagents.trainers:Exported ./models/train-0/RollerBallBrain.nn file

(ml-agents) D:\workspace\unity\Test\UnitySDK>

 

I am still curious whether it is possible to build for the Android platform, install the APK on a device, launch the app, and run reinforcement learning there.

Training-Imitation-Learning.

Unity3D 2019. 6. 4. 18:56

https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Training-Imitation-Learning.md

 


 


Movement/Rotation

Unity3D 2019. 6. 4. 18:00

https://m.blog.naver.com/PostView.nhn?blogId=ocy1011&logNo=220721280305&proxyReferer=https%3A%2F%2Fwww.google.com%2F

 

[Unity tutorial] Rotating an object: making an object rotate according to arrow-key input (blog.naver.com)
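Below is a minimal sketch of the idea covered in the linked tutorial (my own example, not from the post; it assumes the default Input Manager axes "Horizontal" and "Vertical" and uses arbitrary speed values): the vertical axis moves the object along its forward direction, and the horizontal axis rotates it around its up axis.

using UnityEngine;

public class MoveRotate : MonoBehaviour
{
    public float moveSpeed = 3f;      // units per second (arbitrary)
    public float rotateSpeed = 120f;  // degrees per second (arbitrary)

    void Update()
    {
        float v = Input.GetAxis("Vertical");   // W/S or Up/Down arrow keys
        float h = Input.GetAxis("Horizontal"); // A/D or Left/Right arrow keys

        // Move along the object's own forward axis and rotate around its local up axis.
        transform.Translate(Vector3.forward * v * moveSpeed * Time.deltaTime);
        transform.Rotate(Vector3.up, h * rotateSpeed * Time.deltaTime);
    }
}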

 


IL2CPP

Unity3D 2019. 6. 4. 15:41

IL2CPP (Intermediate Language To C++) is a scripting backend developed by Unity that can be used instead of Mono when building projects for various platforms. When you build a project with IL2CPP, Unity converts the IL code from your scripts and assemblies into C++ code and then produces a native binary (e.g., .exe, .apk, .xap) for the chosen platform. IL2CPP can improve the performance, security, and platform compatibility of a Unity project.

 


In short, IL2CPP is the tool that converts IL (Intermediate Language) code into C++.

AOT (Ahead-Of-Time) compilation:
Compilation performed before the program runs.
At build time the source code is translated into an intermediate language, and that intermediate language is then translated into machine code, so nothing needs to be compiled at runtime.

JIT (Just-In-Time) compilation:
The source code is compiled ahead of time only as far as virtual machine code (IL code), which is stored in the assembly;
at runtime, when the code is first used, the JIT compiler converts that virtual machine code into machine code so the instructions can execute.

iOS does not allow JIT compilation (an app cannot generate and execute machine code at runtime), so on iOS
the AOT compilation approach is used instead of a JIT compiler.

The AOT compiler translates the code into an intermediate representation at compile time, and
LLVM turns that intermediate representation into native code (code the CPU/OS can execute directly, i.e., machine code).
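To make the AOT restriction concrete, here is a minimal sketch (my own example, not from the original post; it assumes the ENABLE_IL2CPP scripting define that Unity sets when the IL2CPP backend is selected): APIs that generate IL at runtime, such as System.Reflection.Emit, only work under the Mono/JIT backend, because under IL2CPP everything must already be compiled to native code.

using UnityEngine;

public class AotJitExample : MonoBehaviour
{
    void Start()
    {
#if ENABLE_IL2CPP
        // AOT (IL2CPP) build: there is no JIT, so runtime code generation is unavailable.
        Debug.Log("IL2CPP/AOT build: System.Reflection.Emit cannot be used here.");
#else
        // Mono (JIT) build: IL can still be emitted and compiled to machine code at runtime.
        var method = new System.Reflection.Emit.DynamicMethod(
            "ReturnOne", typeof(int), System.Type.EmptyTypes);
        var il = method.GetILGenerator();
        il.Emit(System.Reflection.Emit.OpCodes.Ldc_I4_1);
        il.Emit(System.Reflection.Emit.OpCodes.Ret);
        int one = (int)method.Invoke(null, null);
        Debug.Log("JIT-compiled dynamic method returned " + one);
#endif
    }
}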

The NDK (Native Development Kit) is a set of tools that lets you use C and C++ code on Android. It provides platform libraries you can use to manage native activities and access physical device components such as sensors and touch input.

Hacking Android Unity IL2CPP games: http://blog.naver.com/linears_/221395979775

C#, IL, and native code: http://www.csharpstudy.com/DevNote/Article/22

C# compilation and IL2CPP: https://blogs.unity3d.com/kr/2015/09/22/kr-csharp-compile-il2cpp/

An introduction to IL2CPP internals: https://blogs.unity3d.com/2015/05/06/an-introduction-to-ilcpp-internals/

Unity Manual, IL2CPP: https://docs.unity3d.com/kr/2018.1/Manual/IL2CPP.html


