[AWS RDS] Modify RDS SQL Server Standard Edition to Enterprise Edition
- Version: Amazon RDS for SQL Server
Microsoft SQL Server is available in multiple editions, each with its own features, performance characteristics, and pricing options, and the edition you install depends on your specific requirements. Some customers want to move from Amazon RDS for SQL Server Standard Edition to Enterprise Edition to take advantage of higher memory limits and high-availability features. This post walks through how to upgrade from RDS SQL Server Standard Edition to RDS SQL Server Enterprise Edition.
To perform the upgrade, you need the following:
- An Amazon RDS for SQL Server instance
- Access to the AWS Management Console
- SQL Server Management Studio
The upgrade process includes the following steps:
- Create a snapshot of the existing RDS SQL Server Standard Edition instance
- Restore the snapshot as an RDS SQL Server Enterprise Edition instance
- Verify the RDS SQL Server Enterprise Edition instance
First, here is how to modify the RDS for SQL Server edition through the console. You create a snapshot of the existing RDS for SQL Server instance and then restore it as a different edition of SQL Server. Afterwards you can confirm the edition in SQL Server Management Studio, using the query shown after the steps below.
1. In the Amazon RDS console, choose Databases.
2. Select the database and, on the Actions menu, choose Take snapshot.
3. Enter a snapshot name and create the snapshot.
4. On the Snapshots page, confirm that the snapshot was created successfully and that its status is Available.
5. Select the snapshot and, on the Actions menu, choose Restore snapshot.
6. Under DB specifications, choose the new edition of SQL Server (SQL Server Enterprise Edition).
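Once the restored instance is available, connect to it from SQL Server Management Studio to verify the edition. A minimal check (the result strings below are typical values, not guaranteed output):

-- Edition reports the installed edition; EngineEdition returns 2 for Standard, 3 for Enterprise.
SELECT SERVERPROPERTY('Edition')       AS Edition,
       SERVERPROPERTY('EngineEdition') AS EngineEdition,
       @@VERSION                       AS VersionBanner;
-- Expected after the restore: an Edition value similar to 'Enterprise Edition: Core-based Licensing (64-bit)'.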
In PostgreSQL, every update by default has to add new index entries pointing to the newly created row version, even when none of the indexed columns were modified. With HOT (Heap Only Tuple) updates, the new tuple is created on the same page as the old tuple whenever possible, and the chain of updated tuples is maintained within that page. Because nothing new was created outside the page, the indexes can keep pointing at the same page and do not need to be modified.
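A minimal PostgreSQL sketch of this behavior (the accounts table is hypothetical; pg_stat_user_tables exposes the HOT counter):

-- Leaving free space on each page (fillfactor) makes same-page HOT updates more likely.
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric) WITH (fillfactor = 80);
INSERT INTO accounts VALUES (1, 100);
-- balance is not indexed, so this update is eligible for HOT.
UPDATE accounts SET balance = 200 WHERE id = 1;
-- n_tup_hot_upd counts updates that did not require new index entries
-- (statistics are collected asynchronously, so the counter may lag briefly).
SELECT n_tup_upd, n_tup_hot_upd FROM pg_stat_user_tables WHERE relname = 'accounts';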
When a Hive job creates more dynamic partitions than the configured limit allows, it fails with an error like the following:

Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"col_1":25513237,"col_2":8104666,"col_3":3808,"col_4":6705,"col_5":"2016-01-21 08:31:33","col_6":42,"col_7":"471.00","col_8":null}
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:157)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:465)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:349)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row {"col_1":25513237,"col_2":8104666,"col_3":3808,"col_4":6705,"col_5":"2016-01-21 08:31:33","col_6":42,"col_7":"471.00","col_8":null}
	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:494)
	at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:148)
	... 8 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveFatalException: [Error 20004]: Fatal error occurred when node tried to create too many dynamic partitions. The maximum number of dynamic partitions is controlled by hive.exec.max.dynamic.partitions and hive.exec.max.dynamic.partitions.pernode. Maximum was set to 100 partitions per node, number of dynamic partitions on this node: 101
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynOutPaths(FileSinkOperator.java:951)
	at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:722)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:882)
	at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
	at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:882)
	at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130)
	at org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:146)
	at org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:484)
	... 9 more
A related guard caps the total number of files a job may create:

[Fatal Error] total number of created files now is 100028, which exceeds 100000. Killing the job.
Both limits are controlled by Hive session settings; raising them before running the job works around these errors:

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.max.dynamic.partitions.pernode=100000;
set hive.exec.max.dynamic.partitions=100000;
set hive.exec.max.created.files=900000;
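For context, a sketch of the kind of statement that runs into these limits (the events and events_staging tables are hypothetical; every distinct dt value produced by the SELECT becomes one dynamic partition):

-- Target table partitioned by date.
CREATE TABLE events (col_1 BIGINT, col_2 BIGINT) PARTITIONED BY (dt STRING);

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;

-- If the staging data spans more distinct dt values than
-- hive.exec.max.dynamic.partitions.pernode allows, the job fails as shown above.
INSERT OVERWRITE TABLE events PARTITION (dt)
SELECT col_1, col_2, to_date(col_5) AS dt
FROM events_staging;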
When you first build a MySQL database, unless the master DB and slave DB are configured together from the start, you must construct the slave DB from a backup of the master DB's data. If you back up the master DB to a specific point in time, the backup file records the binary log file name and position as of the moment the backup completed. The slave DB then receives and replicates the data changes that occurred on the master after that backup point.
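As a sketch of that setup, assuming a hypothetical master host and replication user repl, and a dump taken with mysqldump --master-data (which embeds the binary log coordinates in the file):

-- The dump contains a line like the following; the file/position values are examples:
--   CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000012', MASTER_LOG_POS=107;

-- On the slave, after restoring the dump, point replication at those coordinates:
CHANGE MASTER TO
  MASTER_HOST='master-host',   -- hypothetical hostname
  MASTER_USER='repl',          -- hypothetical replication account
  MASTER_PASSWORD='***',
  MASTER_LOG_FILE='mysql-bin.000012',
  MASTER_LOG_POS=107;
START SLAVE;

-- Both Slave_IO_Running and Slave_SQL_Running should report Yes.
SHOW SLAVE STATUS\G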