https://trans.onionmixer.net/mediawiki/api.php?action=feedcontributions&user=Onionmixer&feedformat=atom흡혈양파의 번역工房 - User contributions [en]2024-03-19T11:04:07ZUser contributionsMediaWiki 1.38.1https://trans.onionmixer.net/mediawiki/index.php?title=WikiTestPage&diff=5628WikiTestPage2022-06-14T06:38:11Z<p>Onionmixer: image upload test</p>
<hr />
<div>[[image:abstract_0021.jpg|none|300px|thumb]]<br />
<br />
http://www.mediawiki.org/wiki/Help:Images/ko<br />
<br />
[[MediaWiki:Geshi.css]]<br />
<br />
[[MediaWiki:Monobook.css]]<br />
<br />
[[MediaWiki:Common.css]]<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
Hottie>>phoneNumber<br />
    "농담용 예제: 발신자를 알아내어 #getLost를 보내고, 이해하지 못하면 술집 의자로 바꿔 버린다."<br />
    | someLoser |<br />
    someLoser := thisContext sender receiver.<br />
    [someLoser perform: #getLost]<br />
        on: someLoser messageNotUnderstoodSignal<br />
        do: [:ignoredException | self become: BarStool new]<br />
</syntaxhighlight><br />
<br />
<br />
* [[Template:key press|key press]]<br />
{{key press|S-SPC}}<br />
<br />
<br />
* [[Template:HighlightGray|회색 형광펜]]<br />
글씨는 {{HighlightGray|이렇게 회색 형광펜으로 강조}} 할 수 있습니다.<br />
<br />
<br />
* [[Template:HighlightPurple|보라색 형광펜]]<br />
글씨는 {{HighlightPurple|이렇게 보라색 형광펜으로 강조}} 할 수 있습니다.<br />
<br />
<br />
* [[Template:HighlightBoldPurple|굵은글씨보라색 형광펜]]<br />
글씨는 {{HighlightBoldPurple|이렇게 굵은글씨보라색 형광펜으로 강조}} 할 수 있습니다.<br />
<br />
<br />
* [[Template:HighlightSubscript|첨자추가]]<br />
글씨는 {{HighlightSubscript|1첨자|2첨자}} 로 처리할 수 있습니다.<br />
<br />
<br />
* [[Template:SmalltalkCodeBox|스몰톡 코드상자]]<br />
글씨는 {{SmalltalkCodeBox|이렇게 스몰톡 코드상자}} 안에 넣을 수 있습니다<br />
<br />
<br />
* [[Template:HighlightBox|강조박스]]<br />
글씨는 {{HighlightBox|이렇게 박스로 강조}} 할 수 있습니다.<br />
<br />
<br />
* [[Template:ExampleBox|예제박스]]<br />
글씨는 {{ExampleBox|이렇게 예제박스로 처리}} 할 수 있습니다.<br />
<br />
<br />
* [[Template:HighlightBoldBox|굵은강조박스]]<br />
글씨는 {{HighlightBoldBox|이렇게 굵은박스로 강조}} 할 수 있습니다.<br />
<br />
<br />
* [[Template:HighlightDoubleBox|두줄강조박스]]<br />
글씨는 {{HighlightDoubleBox|이렇게 두줄박스로 강조}} 할 수 있습니다.<br />
<br />
<br />
* [[Template:HighlightBold|굵은글씨강조]]<br />
글씨는 {{HighlightBold|굵은 글씨로 강조}} 할 수 있습니다.<br />
<br />
<br />
* [[Template:HighlightBlueBoldItalic|굵은글씨강조]]<br />
글씨는 {{HighlightBlueBoldItalic|푸른 굵은 이탤릭 글씨로 강조}} 할 수 있습니다.<br />
<br />
<br />
* [[Template:CommentSqueak|스퀵용 주석]]<br />
{{CommentSqueak|스퀵용 주석이 적용된 모습입니다}}<br />
<br />
<br />
* [[Template:CommentPharo|Pharo용 주석]]<br />
{{CommentPharo|Pharo용 주석이 적용된 모습입니다}}<br />
<br />
<br />
* [[Template:CincomTop|Cincom smalltalk번역관련 top페이지]]<br />
{{CincomTop|[[image:cincom_tutorial_vwtutorial.gif|none|300px|cincom_tutorial_vwtutorial]]}}<br />
<br />
<br />
* [[Template:ObjcNoticeOne|objc notice 템플릿 첫번째]]<br />
{{ObjcNoticeOne|템플릿 테스트입니다}}<br />
<br />
<br />
* [[Template:RoundTitle|Round Title]]<br />
{{RoundTitle|Round Title 테스트입니다}}<br />
<br />
<br />
* [[Template:RoundTitleNavy|Round Title Navy]]<br />
{{RoundTitleNavy|Round Title Navy 테스트입니다}}<br />
<br />
<br />
* [[Template:RoundTitleGreen|Round Title Green]]<br />
{{RoundTitleGreen|Round Title Green 테스트입니다}}<br />
<br />
<br />
* [[Template:MiniSquareTitle|Mini Square Title]]<br />
{{MiniSquareTitle|Mini Square Title 테스트입니다}}<br />
<br />
<br />
* [[Template:GtkdExample|Gtk+ Developer Example]]<br />
{{GtkdExample|Exercise 0-0. Sample Exercise|These boxes show exercises that test your understanding of the material in the section. They can include questions, code challenges, or various other types of material.<br />
<br />
You should complete each of these exercises before proceeding, because they will help you practice the concepts you have learned throughout the current chapter and put them together with concepts from past chapters.}}<br />
<br />
<br />
* [[Template:GtkdNote|Gtk+ Developer Note]]<br />
{{GtkdNote|These boxes give important notes, tips, and cautions. It is essential that you pay attention to them, because they give you information that you will need when developing your own applications.}}<br />
<br />
<br />
* [[Template:GtkdCaution|Gtk+ Developer Caution]]<br />
{{GtkdCaution|These boxes give important notes, tips, and cautions. It is essential that you pay attention to them, because they give you information that you will need when developing your own applications.}}<br />
<br />
<br />
* [[Template:GtkdTip|Gtk+ Developer Tip]]<br />
{{GtkdTip|These boxes give important notes, tips, and cautions. It is essential that you pay attention to them, because they give you information that you will need when developing your own applications.}}<br />
<br />
<br />
* [[Template:GtkdTerminal|Gtk+ Developer Terminal Output]]<br />
{{GtkdTerminal|Textual output in the terminal is shown in a monospace font between these lines, although most output will be in the form of an image, since GTK+ is graphical.}}<br />
<br />
<br />
* [[Template:BoxHeader|Box Header]]<br />
{{BoxHeader|Box Header}}<br />
<br />
<br />
* [[Template:GNOME3Notice|Gnome 3 notice]]<br />
{{GNOME3Notice|Gnome 3 notice}}<br />
<br />
<br />
<pre><br />
<div style="padding: 0.1em 0.3em; border: 1px solid rgb(169, 169, 169); font-size: 1.1em; font-family: Arial,Helvetica,sans-serif; background-color: #a4a4a4; color: rgb(51, 51, 51); border-radius: 3px 3px 3px 3px; display: inline-block; margin: 0px .1em; text-shadow: 0px 1px 0px rgb(255, 255, 255); line-height: 1.2; white-space: nowrap; box-shadow: 0px 1px 0px rgba(160, 160, 160, 0.2), 0px 0px 0px 2px rgb(150, 150, 150) inset;">{{{1}}}</div><br />
</pre><br />
<br />
<br />
{{CommentSqueak|스퀵용 주석에 {{Template:HighlightBold|bold}}를 추가로 적용한 모습입니다}}<br />
<br />
<span style="background-color:#DDDDDD">aaaa</span><br />
<br />
<center><br />
{| align="center" style="font-weight: bold; border: 1px solid black; background-color:#BBBBBB;"<br />
|- style="text-align: center; color: white; background-color: black;"<br />
| colspan="4" | 예 4.1: aPen color: Color yellow의 평가 분해하기<br />
|-<br />
|&nbsp;||aPen color||: Color yellow||&nbsp;<br />
|-<br />
|(1)||&nbsp;||Color yellow||''"단항메시지가 먼저 발송되고"''<br />
|-<br />
|&nbsp;||&nbsp;||&rArr; aColor||&nbsp;<br />
|-<br />
|(2)||aPen color||: aColor||''"그 다음 키워드 메시지가 발송됨"''<br />
|}<br />
</center><br />
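위 표의 평가 순서는 워크스페이스에서 다음 코드 토막으로 직접 확인해 볼 수 있다. Pen과 Color가 시스템 라이브러리에 있는 클래스라고 가정한 스케치일 뿐이며, 환경에 따라 클래스 이름은 다를 수 있다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"단항 메시지는 키워드 메시지보다 우선순위가 높아 먼저 평가된다."<br />
| aPen aColor |<br />
aPen := Pen new.              "연습용 Pen 인스턴스 (가정)"<br />
aColor := Color yellow.       "(1) 단항 메시지 yellow가 먼저 발송되어 aColor를 돌려줌"<br />
aPen color: aColor.           "(2) 그 다음 키워드 메시지 color:가 aPen에 발송됨"<br />
</syntaxhighlight><br />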
<br />
<br />
<br />
<br />
<center>정렬테스트</center><br />
<br />
주석용테스트<br />
<ref name="asdasd">그럭저럭</ref><br />
<br />
<br />
<p>[[image:abstract_0021.jpg|right|300px|thumb]]<br />
이미지 레이아웃 테스트asdasdasdasdasdasdasdasdasdasdsadasdasdasdasd<br />
asdasdasdasdasdasdsadasdasdasdasdasdasdasdasdas<br />
dasdsadasdasdasdasdasdasdasdasdasdasdsadasdasdasdasd<br />
asdasdasdasdasdasdsadasdasdasdasdasdasdasdasdasdas<br />
dsadasdasdasdasdasdasdasdasdasdasdsadasdasdasdasdasdas<br />
dasdasdasdasdsadasdasdasdasdasdasdasdasdasdasdsadasdasda<br />
sdasdasdasdasdasdasdasdsadasdasdasdasdasdasdasdasdas<br />
dasdsadasdasdasdasdasdasdasdasdasdasdsadasdasdasdasdas<br />
dasdasdasdasdasdsadasdasdasdasdasdasdasdasdasdas<br />
dsadasdasdasdasdasdasdasdasdasdasdsadasdasdasda<br />
sdasdasdasdasdasdasdsadasdasdasdasdasdasdasdasdasdasdsad<br />
asdasdasdasdasdasdasdasdasdasdsadasdasdasdasda<br />
sdasdasdasdasdasdsad</p><br />
<br />
<br />
<br />
{| class = "collapsible collapsed" width=100% style = "border-radius: 10px; -moz-border-radius: 10px; -webkit-border-radius: 10px; -khtml-border-radius: 10px; -icab-border-radius: 10px; -o-border-radius: 10px; border: 5px groove #000066;"<br />
|- style="color: white; background-color: black;"<br />
|'''클래스'''||'''설명'''<br />
|- style="vertical-align:top;"<br />
| colspan="2" |http://www.mediawiki.org/wiki/Help_talk:Tables<br />
|- style="color: black; background-color: gray;"<br />
| colspan="2" |표 4.9: 라자루스의 데몬 클래스<br />
|}<br />
<br />
<br />
{| style="border: 1px solid black;"<br />
|- style="color: white; background-color: black;"<br />
|'''클래스'''||'''설명'''<br />
|- style="vertical-align:top;"<br />
|'''AA'''||BB<br />
|- style="color: black; background-color: gray;"<br />
| colspan="2" |표 4.9: 라자루스의 데몬 클래스<br />
|}<br />
<br />
<br />
[[image:ppp.jpg|none|thumb|image upload test]]<br />
<br />
[[image:bashtop_01.png|none|thumb|image upload test]]<br />
<br />
<br />
== 주석 ==<br />
<references /></div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=File:Bashtop_01.png&diff=5627File:Bashtop 01.png2022-06-14T06:32:07Z<p>Onionmixer: </p>
<hr />
<div></div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=TheArtandScienceofSmalltalk:Chapter_01&diff=5626TheArtandScienceofSmalltalk:Chapter 012022-06-14T05:46:55Z<p>Onionmixer: </p>
<hr />
<div>;제 1 장 시작을 위한 충고<br />
<br />
==시작을 위한 충고==<br />
<br />
제 1장은 스몰토크를 처음으로 사용하는 사람을 위해 작성되었다. 이제 (스몰토크를 학습하는) 하나의 과제를 맡게 될 것인데, 이러한 작업은 혜택이 많은 동시에 매우 불편할 수 있다. 이 책의 목표들 중 하나는 혜택은 늘리고 불편함을 줄이는 데에 있다. 제 1장의 목표는 당신이 올바른 방향으로 시작하도록 돕는 것이다. 객체 지향 프로그래밍(OOP)과 스몰토크로 어떻게 전향하는지를 간략하게 살펴보고, 주의해야 할 사항을 몇 가지 언급한 후, 책의 나머지 부분에서 다루는 주제에 대한 배경 지식을 제공할 것이다. 스몰토크 프로젝트의 실질적 관리는 마지막 장에서 좀 더 자세히 살펴볼 것이다. <br />
<br />
<br />
스몰토크에 능숙해지기 위해 고려해야 할 것은 자신의 상황, 배경, 경험, 목표, 자원 등에 따라 다를 것이다. 그 결과 이번 장은 과거에 당신과 비슷한 위치에서 다른 사람들에게 효과가 나타난 것으로 알려진 의견과 생각을 제시한다. <br />
<br />
<br />
스몰토크를 이미 사용하고 있는 사람이라도 이번 장에서 흥미로운 점을 발견할 것이다. 사실, 현재 어떤 특정한 문제가 있다면 문제가 어디서 발생했는지 발견할 수 있을지도 모른다. 하지만 이번 장이 자신에게 맞지 않다고 생각한다면 다음 장으로 넘어가도 무방하다. <br />
<br />
<br />
<br />
===스몰토크 학습 곡선===<br />
<br />
새로운 언어를 힘들이지 않고 학습하는 경우는 없다. 애석하게도 스몰토크의 경우 다른 새로운 언어에 비해 학습해야 할 것이 조금 더 많다. 객체 지향 프로그래밍을 배우고, 스몰토크 언어 자체를 배우고, VisualWorks 개발 환경을 사용하는 방법을 학습하며, 자신만의 스몰토크 프로그램을 작성하는 방법과 시스템 코드 라이브러리의 코드를 재사용하는 방법도 배워야 할지 모른다. 가장 중요한 것은, 자신만의 프로그래밍 문제를 해결하기 위한 완전히 새로운 방법을 학습해야 할 수도 있다는 점이다. <br />
<br />
<br />
스몰토크를 처음 이용하는 많은 사람들은 시작은 매우 열정적일지 모르나 그들이 흡수해야 하는 변화가 증가하면서 생긴 불편함으로 인해 처음 갖고 있던 열정이 급속하게 떨어진다. 스몰토크에서 프로그래밍은 다른 많은 언어에서의 프로그래밍과 다르기 때문에 너무도 자연스러운 일이다. 이러한 차이의 범위는 아래 도표에서 질적으로 표시된 특유의 스몰토크 '학습 곡선'을 야기한다. 곡선의 기울기와 길이는 물론 이전 경험, 그리고 자신의 기대치에 따라 많이 좌우된다. 스몰토크에서 프로그래밍 시 덜 불편하고 더 편안한 느낌을 가지기 위해선 2주~6개월 정도의 시간이 필요할 것이다. <br />
<br />
<br />
다행히도 불편함의 수준도 줄이고 학습 곡선의 정점까지 가는 데에 걸리는 시간을 줄이기 위한 적극적 조치가 몇 가지 있다. <br />
<br />
<br />
<br />
===문화적 충격 대비하기===<br />
<br />
스몰토크 학습 곡선은 올바른 방향으로 시작하면 더 납작해지고 길이도 줄어들 수 있다.<br />
<br />
[[image:ass_image_01_01.png|Running Curve]]<br />
<br />
<br />
스몰토크는 다른 프로그래밍 언어와 다르다는 점은 아무리 강조해도 지나치지 않다. OOP를 처음으로 시도한다는 이유(물론 충분한 이유가 되겠지만!) 때문만은 아니다. 스몰토크와 다른 언어들 간에 실제 관리 및 기술적 차이가 존재한다. 예를 들어, 스몰토크는 다른 언어들에 비해 훨씬 더 상호작용적이고 탐구적인 프로그래밍 스타일을 촉진하고 안전하게 지원한다. 이것이 바로 스몰토크의 생산성이 그리도 유명한 이유이다. 그렇다고 스몰토크 프로그램을 디자인할 필요가 없다는 의미는 아니다. 오히려, 스몰토크를 최상으로 이용하길 원한다면 자신이 익숙한 것보다 더 상호작용적인 디자인과 프로그래밍 스타일을 채택해야 함을 의미한다. 이러한 점이 매우 불편하게 작용할 수 있는데, 특히 전형적인 '단일 경로(single-pass)' 또는 '폭포수(waterfall)' 방법론을 이용해 시스템을 개발하는 데에 익숙하다면 더 그러할 것이다. <br />
<br />
<br />
기술적인 측면에서 보면 완전한 프로그래밍 언어인 스몰토크는 다른 언어들보다 훨씬 더 많은 일을 할 수 있다. 그렇지만 완전히 다른 방식으로 일을 진행함을 자주 발견할 것이다. 예를 들어, PC, 매킨토시 또는 유닉스 워크스테이션에 그래픽 사용자 인터페이스(GUIs)를 이용해 애플리케이션을 작성하는 데에 익숙한 사람은 스몰토크 GUIs도 똑같은 일을 수행할 수 있음을 발견할 것이다. 하지만 완전히 다른 방식으로 빌드된다 (주로 역사적 이유로 스몰토크 사용자 인터페이스는 이벤트 위주보다는 '폴링(polling)'을 통해 작업한다).<br />
<br />
<br />
이러한 유형의 차이는 포기하는 듯한 느낌이 들게끔 만들기도 하는데, 어렵게 얻은 지식과 경험 대부분이 스몰토크 환경에서 잘 사용되지 않는 것처럼 보이기 때문이다. 꼭 그럴 필요가 없다는 사실을 보이는 것 또한 이번 책의 목표에 해당한다. <br />
<br />
<br />
<br />
===작은 것부터 시작하기===<br />
<br />
첫 번째 스몰토크 연습으로 작은 것을 선택하라는 말은 명백하게 보이지만 다시 언급할 가치가 있다. 익숙한 크기의 시스템이라면 그 일부(fraction)만이라도 처음부터 빌드하려 드는 것은 사서 고생하는 격이다. 작은 것부터 시작하는 편이 위에서 언급한 문화적 충격을 줄이는 데에 큰 도움이 될 것이다. '미션 크리티컬' 애플리케이션으로 바로 착수하기보다는 실험적 프로젝트에 먼저 스몰토크를 시도해보는 편이 낫다는 사실은 두말할 필요도 없다. 물론 이 모든 것은 상황에 따라 달라진다. <br />
<br />
<br />
프로그래머들로 구성된 큰 팀을 스몰토크로 전향할 계획을 세운 관리자라면, 두 명이나 세 명으로 시작하는 편이 훨씬 낫다. 그들에게 새로운 기술을 탐구할 자유를 부여하고, 학습 곡선의 정점으로 스스로 이동하도록 놔두라. 나머지 팀 구성원들이 곡선의 정점으로 오르도록 도와줄 수 있는 지역 전문가들이 될 것이다. <br />
<br />
<br />
독자가 만일 그러한 팀의 구성원이거나 혹은 스몰토크를 개별적으로 학습하고 있다면, 스스로 행할 수 있는 일들이 많이 있다. 다른 사람의 스몰토크 프로그램에 접근할 수 있는 운을 가진 사람이라면 그들의 프로그램을 단순한 방식으로 수정해보라. 그런 기회가 없다면 자신의 프로그래밍 문제의 부분 집합을 작업해보라. 예를 들어, 주요 데이터 구조 중 일부를 스몰토크 객체로서 표현해본다든가, VisualWorks GUI 개발 툴을 이용해 주요 윈도우 대화상자(window dialogue)를 빌드해보도록 한다. <br />
<br />
<br />
<br />
===상호작용적으로 탐구하고 작업하라===<br />
<br />
스몰토크를 강력하게 만드는 것들 중 하나로 대화형 프로그래밍 환경을 들 수 있다. 이러한 환경을 더 많이 활용할수록 학습 곡선을 빠르게 따르고 문화적 충격은 줄어들 것이다. 순식간에 코드의 토막(snippet)을 생성하고 실행할 수 있음을 기억하라. 이는 특정 기능이 어떻게 작용하는지 이해할 수 없을 때 이상적이다. 그것을 이해하기 위해 매뉴얼을 훑어보는 데에 시간을 허비하지 말라. 애석하게도 스몰토크는 책을 통해 학습할 수 없다. 대신 실험을 하라! 숙련된 스몰토크 프로그래머들은 초보자들에 비해 매뉴얼을 사용하는 일이 적은데, 시스템에 대해 더 많이 알아서가 아니라 그들이 알아야 하는 것을 발견하기 위해 시스템을 사용하는 방법을 학습하였기 때문이다. 이와 관련된 기술은 제 2부-스몰토크의 예술-에서 상세히 다룰 것이다. <br />
<br />
<br />
무언가가 작동하는지 확인하기 위한 실험을 준비하는 데에 약 30분 정도가 걸리더라도, 결정적인 답을 얻을 수 있다면 그 정도 시간은 충분히 투자할 가치가 있다. 물론 여기서도 주의사항이 있다:<br />
<br />
<br />
<br />
===코드를 버릴 준비를 하라===<br />
<br />
스몰토크에서 많은 코드를 빌드하기란 매우 쉽다-어쨌든 매우 생산적인 환경이다. 하지만 실험이 끝났다면 자신의 코드를 버릴 준비를 해야 한다. 그렇다고 해서 엄격한 실험 단계를 거친 후에 처음부터 모든 것을 다시 작성해야 한다는 의미는 아니다. 이는 무언가를 프로그래밍하는 데에 있어 첫 시도보단 두 번째 시도가 훨씬 더 나을 것이라는 사실을 이용해야 한다는 의미다. 많은 언어에서는 이것이 사실일지 모르나, 대개 이러한 이점을 활용할 수 있는 여건이 안 된다. 스몰토크에서는 두 번째 시도가 훨씬 더 나을 뿐 아니라 짧은 시간 내에 생산하도록 해줄 것이다. 그만큼의 가치가 충분히 있다. <br />
<br />
<br />
<br />
===도움을 얻어라===<br />
<br />
너무나 당연하겠지만 스몰토크의 경험이 어느 정도 있는 사람에게 도움을 요청하여 얻으면 학습 곡선을 따르고 자립성을 키우기가 훨씬 수월해질 것이다. 간단한 질문에 대한 답을 찾는 데에 몇 시간씩 들이지 않아도 될 뿐만 아니라 훨씬 쉽게 훌륭한 스몰토크 '스타일'로 곧바로 프로그래밍을 시작할 수 있을 것이다. 다시 말하지만, 처음부터 훌륭한 스몰토크 스타일을 이용하도록 돕는 것은 이 책의 목표에 해당한다. <br />
<br />
<br />
자신의 조직에서 도와줄 사람을 찾을 수도 있고, 외부에서 알아볼 수도 있다. 하지만 스몰토크는 심지어 다른 대화형 객체 지향 언어들과도 다르다는 점을 명심하라. 스몰토크 경험이 있는 사람과 작업하도록 하라. <br />
<br />
<br />
도움을 받을 수 없거나 설사 도움을 받는다 하더라도 이 책을 읽으면 스스로 학습 곡선을 따르는 데에 많은 도움이 될 것이다. 하지만 스스로를 도울 작정이라면 스몰토크 시스템을 준비하여 실행시키는 것이 최선의 방법이다. 무언가를 탐구하고 시도하는 것은 객체 지향 프로그래밍과 스몰토크의 개념에 대한 자신의 이해를 시험하는 유일한 방법으로, 이제부터 이를 논하고자 한다. <br />
<br />
<br />
<br />
===여기부터 어디까지?===<br />
<br />
스몰토크의 예술과 과학은 VisualWorks 프로그래밍 환경(을 비롯해 다른 환경들)에 기반이 되는 개발 시스템, 코드-라이브러리, 스몰토크 언어를 학습하고 이해하도록 돕기 위한 것이다. 이 책은 두 가지 부분으로 나뉜다. <br />
<br />
<br />
제1부, 스몰토크의 과학은 스몰토크 자체를 소개한다. OOP의 기본 개념을 살펴보고 스몰토크 언어를 이야기한 후 시스템 라이브러리에서 가장 중요한 클래스를 몇 가지 다룬다. 개발 환경도 간략하게 살펴보겠지만, 이 분야에서 스스로 준비하여 실행하는 데에 어느 정도 책임질 준비가 되어 있어야 한다. 자신이 정말로 이해해야 하는 시스템의 부분들에 대해 스스로 알아낼 필요가 있는 기본 지식을 제공하는 것이다. 따라서 전체 시스템을 철저하게 다루지는 않는다. <br />
<br />
<br />
이 책의 두 번째 부분은 스몰토크의 예술로, 스몰토크로 작업 시 수반되는 좀 더 까다롭고 일반적인 문제를 다룬다. 스몰토크 프로그램을 어떻게 디자인하는지를 살펴볼 것이다 (가령 객체 지향 프로그래밍을 어떻게 하는지). 이에 더해, 스몰토크에서 어떻게 코딩하는지, 어떻게 개발 환경의 기능을 최상으로 활용하는지도 고려할 것이다 (디버깅에 관한 장도 포함). <br />
<br />
<br />
바로 다음 장은 객체 입문으로, 처음부터 시작하고자 한다. 이전에 OOP의 경험이 전혀 없다면 여기부터 시작한다. 하지만 OOP가 무엇인지 (그리고 OOP를 재교육할 필요가 없다고 생각되면) 확실히 알고 있으나 스몰토크의 경험이 없다면 제 3장-스몰토크의 입문-부터 시작한다. 모두에게 행운을 빌며, 한 가지만 기억하길 바란다. 스몰토크는 즐거워야 한다. <br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:TheArtandScienceofSmalltalk]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=TheArtandScienceofSmalltalk:Chapter_01&diff=5625TheArtandScienceofSmalltalk:Chapter 012022-06-14T05:00:47Z<p>Onionmixer: </p>
<hr />
<div>;제 1 장 시작을 위한 충고<br />
<br />
==시작을 위한 충고==<br />
<br />
제 1장은 스몰토크를 처음으로 사용하는 사람을 위해 작성되었다. 이에 앞서 하나의 과제를 (스몰토크를 학습하는) 하나 맡게 될 것인데, 이러한 작업은 혜택이 많은 동시 매우 불편할 수 있다. 이 책의 목표들 중 하나는 혜택은 늘이고 불편함을 줄이는 데에 있다. 제 1장의 목표는 당신이 올바른 방향으로 시작하도록 돕는 것이다. 객체 지향 프로그래밍(OOP)과 스몰토크로 어떻게 전향하는지를 간략하게 살펴보고, 주의해야 할 사항을 몇 가지 언급한 후, 책의 나머지 부분에서 다루는 주제에 대한 배경 지식을 제공할 것이다. 스몰토크 프로젝트의 사실적 관리는 마지막 장에서 좀 더 자세히 살펴볼 것이다. <br />
<br />
<br />
스몰토크에 능숙해지기 위해 고려해야 할 것은 자신의 상황, 배경, 경험, 목표, 자원 등에 따라 다를 것이다. 그 결과 이번 장은 과거에 당신과 비슷한 위치에서 다른 사람들에게 효과가 나타난 것으로 알려진 의견과 생각을 제시한다. <br />
<br />
<br />
스몰토크를 이미 사용하고 있는 사람이라도 이번 장에서 흥미로운 점을 발견할 것이다. 사실, 현재 어떤 특정한 문제가 있다면 문제가 어디서 발생했는지 발견할 수 있을지도 모른다. 하지만 이번 장이 자신에게 맞지 않다고 생각한다면 다음 장으로 넘어가도 무방하다. <br />
<br />
<br />
<br />
===스몰토크 학습 곡선===<br />
<br />
새로운 언어를 힘들이지 않고 학습하는 경우는 없다. 애석하게도 스몰토크의 경우 다른 새로운 언어에 비해 학습해야 할 것이 조금 더 많다. 객체 지향 프로그래밍을 배우고, 스몰토크 언어 자체를 배우고, VisualWorks 개발 환경을 사용하는 방법을 학습하며, 자신만의 스몰토크 프로그램을 작성하는 방식과 시스템의 코드-라이브러리에 코드를 재사용하는 방법도 배워야 할지도 모른다. 가장 중요한 것은, 자신만의 프로그래밍 문제를 해결하기 위한 완전히 새로운 방법을 학습해야 할 수도 있다는 점이다. <br />
<br />
<br />
스몰토크를 처음 이용하는 많은 사람들은 시작은 매우 열정적일지 모르나 그들이 흡수해야 하는 변화가 증가하면서 생긴 불편함으로 인해 처음 갖고 있던 열정이 급속하게 떨어진다. 스몰토크에서 프로그래밍은 다른 많은 언어에서의 프로그래밍과 다르기 때문에 너무도 자연스러운 일이다. 이러한 차이의 범위는 아래 도표에서 질적으로 표시된 특유의 스몰토크 '학습 곡선'을 야기한다. 곡선의 기울기와 길이는 물론 이전 경험, 그리고 자신의 기대치에 따라 많이 좌우된다. 스몰토크에서 프로그래밍 시 덜 불편하고 더 편안한 느낌을 가지기 위해선 2주~6개월 정도의 시간이 필요할 것이다. <br />
<br />
<br />
다행히도 불편함의 수준도 줄이고 학습 곡선의 정점까지 가는 데에 걸리는 시간을 줄이기 위한 적극적 조치가 몇 가지 있다. <br />
<br />
<br />
<br />
===문화적 충격 대비하기===<br />
<br />
스몰토크 학습 곡선, 올바른 방향으로 시작 시 납작해지고 길이가 줄어들 수 있다.<br />
<br />
[[image:ass_image_01_01.png|thumb|left|512px|Running Curve]]<br />
<br />
<br />
스몰토크는 다른 프로그래밍 언어와 다르다는 점은 아무리 강조해도 지나치지 않다. OOP를 처음으로 시도한다는 이유(물론 충분한 이유가 되겠지만!) 때문만은 아니다. 스몰토크와 다른 언어들 간에 실제 관리 및 기술적 차이가 존재한다. 예를 들어, 스몰토크는 다른 언어들에 비해 훨씬 더 상호작용적이고 탐구적인 프로그래밍 스타일을 촉진하고 안전하게 지원한다. 이것이 바로 스몰토크의 생산성이 그리도 유명한 이유이다. 그렇다고 스몰토크 프로그램을 디자인할 필요가 없다는 의미는 아니다. 오히려, 스몰토크를 최상으로 이용하길 원한다면 자신이 익숙한 것보다 더 상호작용적인 디자인과 프로그래밍 스타일을 채택해야 함을 의미한다. 이러한 점이 매우 불편하게 작용할 수 있는데, 특히 전형적인 '단일 경로(single-pass)' 또는 '폭포수(waterfall)' 방법론을 이용해 시스템을 개발하는 데에 익숙하다면 더 그러할 것이다. <br />
<br />
<br />
기술적인 측면에서 보면 완전한 프로그래밍 언어인 스몰토크는 다른 언어들보다 훨씬 더 많은 일을 할 수 있다. 그렇지만 완전히 다른 방식으로 일을 진행함을 자주 발견할 것이다. 예를 들어, PC, 매킨토시 또는 유닉스 워크스테이션에 그래픽 사용자 인터페이스(GUIs)를 이용해 애플리케이션을 작성하는 데에 익숙한 사람은 스몰토크 GUIs도 똑같은 일을 수행할 수 있음을 발견할 것이다. 하지만 완전히 다른 방식으로 빌드된다 (주로 역사적 이유로 스몰토크 사용자 인터페이스는 이벤트 위주보다는 '폴링(polling)'을 통해 작업한다).<br />
<br />
<br />
이러한 유형의 차이는 포기하는 듯한 느낌이 들게끔 만들기도 하는데, 어렵게 얻은 지식과 경험 대부분이 스몰토크 환경에서 잘 사용되지 않는 것처럼 보이기 때문이다. 꼭 그럴 필요가 없다는 사실을 보이는 것 또한 이번 책의 목표에 해당한다. <br />
<br />
<br />
<br />
===작은 것부터 시작하기===<br />
<br />
첫 번째 스몰토크 연습, 작은 것을 선택하라는 말은 명백하게 보이지만 다시 언급할 가치가 있겠다. 처음 빌드하는 데에 익숙한 시스템 크기는 아무리 일부(fraction)만 빌드를 시도한다 하더라도 사서 고생하는 격이다. 작은 것부터 시작하는 편이 위에서 언급한 문화적 충격을 줄이는 데에 큰 도움이 될 것이다. '미션 크리티컬' 애플리케이션으로 바로 착수하기보다는 실험적 프로젝트에 먼저 스몰토크를 시도해보는 편이 낫다는 사실은 두말할 필요도 없다. 물론 이 모든 것은 상황에 따라 달라진다. <br />
<br />
<br />
프로그래머들로 구성된 큰 팀을 스몰토크로 전향할 계획을 세운 관리자라면, 두 명이나 세 명으로 시작하는 편이 훨씬 낫다. 그들에게 새로운 기술을 탐구할 자유를 부여하고, 학습 곡선의 정점으로 스스로 이동하도록 놔두라. 나머지 팀 구성원들이 곡선의 정점으로 오르도록 도와줄 수 있는 지역 전문가들이 될 것이다. <br />
<br />
<br />
독자가 만일 그러한 팀의 구성원이거나 혹은 스몰토크를 개별적으로 학습하고 있다면, 스스로 행할 수 있는 일들이 많이 있다. 다른 사람의 스몰토크 프로그램에 접근할 수 있는 운을 가진 사람이라면 그들의 프로그램을 단순한 방식으로 수정해보라. 그런 기회가 없다면 자신의 프로그래밍 문제의 부분 집합을 작업해보라. 예를 들어, 주요 데이터 구조 중 일부를 스몰토크 객체로서 표현해본다든가, VisualWorks GUI 개발 툴을 이용해 주요 윈도우 대화상자(window dialogue)를 빌드해보도록 한다. <br />
<br />
<br />
<br />
===상호작용적으로 탐구하고 작업하라===<br />
<br />
스몰토크를 강력하게 만드는 것들 중 하나로 대화형 프로그래밍 환경을 들 수 있다. 이러한 환경을 더 많이 활용할수록 학습 곡선을 빠르게 따르고 문화적 충격은 줄어들 것이다. 순식간에 코드의 토막(snippet)을 생성하고 실행할 수 있음을 기억하라. 이는 특정 기능이 어떻게 작용하는지 이해할 수 없을 때 이상적이다. 그것을 이해하기 위해 매뉴얼을 훑어보는 데에 시간을 허비하지 말라. 애석하게도 스몰토크는 책을 통해 학습할 수 없다. 대신 실험을 하라! 숙련된 스몰토크 프로그래머들은 초보자들에 비해 매뉴얼을 사용하는 일이 적은데, 시스템에 대해 더 많이 알아서가 아니라 그들이 알아야 하는 것을 발견하기 위해 시스템을 사용하는 방법을 학습하였기 때문이다. 이와 관련된 기술은 제 2부-스몰토크의 예술-에서 상세히 다룰 것이다. <br />
<br />
<br />
무언가가 작동하는지 시험하기 위한 실험을 준비하는 데에는 약 30분 정도 소요되지만, 결정적 답을 얻으려면 그 정도 시간은 투자할 가치가 있다. 실험할 가치가 있는 것이다. 물론 여기서도 주의사항이 있다:<br />
<br />
<br />
<br />
===코드를 버릴 준비를 하라===<br />
<br />
스몰토크에서 많은 코드를 빌드하기란 매우 쉽다-어쨌든 매우 생산적인 환경이다. 하지만 실험이 끝났다면 자신의 코드를 버릴 준비를 해야 한다. 그렇다고 해서 엄격한 실험 단계를 거친 후에 처음부터 모든 것을 다시 작성해야 한다는 의미는 아니다. 이는 무언가를 프로그래밍하는 데에 있어 첫 시도보단 두 번째 시도가 훨씬 더 나을 것이라는 사실을 이용해야 한다는 의미다. 많은 언어에서는 이것이 사실일지 모르나, 대개 이러한 이점을 활용할 수 있는 여건이 안 된다. 스몰토크에서는 두 번째 시도가 훨씬 더 나을 뿐 아니라 짧은 시간 내에 생산하도록 해줄 것이다. 그만큼의 가치가 충분히 있다. <br />
<br />
<br />
<br />
===도움을 얻어라===<br />
<br />
너무나 당연하겠지만 스몰토크의 경험이 어느 정도 있는 사람에게 도움을 요청하여 얻으면 학습 곡선을 따르고 자립성을 키우기가 훨씬 수월해질 것이다. 간단한 질문에 대한 답을 찾는 데에 몇 시간씩 들이지 않아도 될 뿐만 아니라 훨씬 쉽게 훌륭한 스몰토크 '스타일'로 곧바로 프로그래밍을 시작할 수 있을 것이다. 다시 말하지만, 처음부터 훌륭한 스몰토크 스타일을 이용하도록 돕는 것은 이 책의 목표에 해당한다. <br />
<br />
<br />
자신의 조직에서 도와줄 사람을 찾을 수도 있고, 외부에서 알아볼 수도 있다. 하지만 스몰토크는 심지어 다른 대화형 객체 지향 언어들과도 다르다는 점을 명심하라. 스몰토크 경험이 있는 사람과 작업하도록 하라. <br />
<br />
<br />
도움을 받을 수 없거나 설사 도움을 받는다 하더라도 이 책을 읽으면 스스로 학습 곡선을 따르는 데에 많은 도움이 될 것이다. 하지만 스스로를 도울 작정이라면 스몰토크 시스템을 준비하여 실행시키는 것이 최선의 방법이다. 무언가를 탐구하고 시도하는 것은 객체 지향 프로그래밍과 스몰토크의 개념에 대한 자신의 이해를 시험하는 유일한 방법으로, 이제부터 이를 논하고자 한다. <br />
<br />
<br />
<br />
===여기부터 어디까지?===<br />
<br />
스몰토크의 예술과 과학은 VisualWorks 프로그래밍 환경(을 비롯해 다른 환경들)에 기반이 되는 개발 시스템, 코드-라이브러리, 스몰토크 언어를 학습하고 이해하도록 돕기 위한 것이다. 이 책은 두 가지 부분으로 나뉜다. <br />
<br />
<br />
제1부, 스몰토크의 과학은 스몰토크 자체를 소개한다. OOP의 기본 개념을 살펴보고 스몰토크 언어를 이야기한 후 시스템 라이브러리에서 가장 중요한 클래스를 몇 가지 다룬다. 개발 환경도 간략하게 살펴보겠지만, 이 분야에서 스스로 준비하여 실행하는 데에 어느 정도 책임질 준비가 되어 있어야 한다. 자신이 정말로 이해해야 하는 시스템의 부분들에 대해 스스로 알아낼 필요가 있는 기본 지식을 제공하는 것이다. 따라서 전체 시스템을 철저하게 다루지는 않는다. <br />
<br />
<br />
이 책의 두 번째 부분은 스몰토크의 예술로, 스몰토크로 작업 시 수반되는 좀 더 까다롭고 일반적인 문제를 다룬다. 스몰토크 프로그램을 어떻게 디자인하는지를 살펴볼 것이다 (가령 객체 지향 프로그래밍을 어떻게 하는지). 이에 더해, 스몰토크에서 어떻게 코딩하는지, 어떻게 개발 환경의 기능을 최상으로 활용하는지도 고려할 것이다 (디버깅에 관한 장도 포함). <br />
<br />
<br />
바로 다음 장은 객체 입문으로, 처음부터 시작하고자 한다. 이전에 OOP의 경험이 전혀 없다면 여기부터 시작한다. 하지만 OOP가 무엇인지 (그리고 OOP를 재교육할 필요가 없다고 생각되면) 확실히 알고 있으나 스몰토크의 경험이 없다면 제 3장-스몰토크의 입문-부터 시작한다. 모두에게 행운을 빌며, 한 가지만 기억하길 바란다. 스몰토크는 즐거워야 한다. <br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:TheArtandScienceofSmalltalk]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=TheArtandScienceofSmalltalk:Chapter_01&diff=5624TheArtandScienceofSmalltalk:Chapter 012022-06-14T04:59:14Z<p>Onionmixer: </p>
<hr />
<div>;제 1 장 시작을 위한 충고<br />
<br />
==시작을 위한 충고==<br />
<br />
제 1장은 스몰토크를 처음으로 사용하는 사람을 위해 작성되었다. 이에 앞서 하나의 과제를 (스몰토크를 학습하는) 하나 맡게 될 것인데, 이러한 작업은 혜택이 많은 동시 매우 불편할 수 있다. 이 책의 목표들 중 하나는 혜택은 늘이고 불편함을 줄이는 데에 있다. 제 1장의 목표는 당신이 올바른 방향으로 시작하도록 돕는 것이다. 객체 지향 프로그래밍(OOP)과 스몰토크로 어떻게 전향하는지를 간략하게 살펴보고, 주의해야 할 사항을 몇 가지 언급한 후, 책의 나머지 부분에서 다루는 주제에 대한 배경 지식을 제공할 것이다. 스몰토크 프로젝트의 사실적 관리는 마지막 장에서 좀 더 자세히 살펴볼 것이다. <br />
<br />
<br />
스몰토크에 능숙해지기 위해 고려해야 할 것은 자신의 상황, 배경, 경험, 목표, 자원 등에 따라 다를 것이다. 그 결과 이번 장은 과거에 당신과 비슷한 위치에서 다른 사람들에게 효과가 나타난 것으로 알려진 의견과 생각을 제시한다. <br />
<br />
<br />
스몰토크를 이미 사용하고 있는 사람이라도 이번 장에서 흥미로운 점을 발견할 것이다. 사실, 현재 어떤 특정한 문제가 있다면 문제가 어디서 발생했는지 발견할 수 있을지도 모른다. 하지만 이번 장이 자신에게 맞지 않다고 생각한다면 다음 장으로 넘어가도 무방하다. <br />
<br />
<br />
<br />
===스몰토크 학습 곡선===<br />
<br />
새로운 언어를 힘들이지 않고 학습하는 경우는 없다. 애석하게도 스몰토크의 경우 다른 새로운 언어에 비해 학습해야 할 것이 조금 더 많다. 객체 지향 프로그래밍을 배우고, 스몰토크 언어 자체를 배우고, VisualWorks 개발 환경을 사용하는 방법을 학습하며, 자신만의 스몰토크 프로그램을 작성하는 방식과 시스템의 코드-라이브러리에 코드를 재사용하는 방법도 배워야 할지도 모른다. 가장 중요한 것은, 자신만의 프로그래밍 문제를 해결하기 위한 완전히 새로운 방법을 학습해야 할 수도 있다는 점이다. <br />
<br />
<br />
스몰토크를 처음 이용하는 많은 사람들은 시작은 매우 열정적일지 모르나 그들이 흡수해야 하는 변화가 증가하면서 생긴 불편함으로 인해 처음 갖고 있던 열정이 급속하게 떨어진다. 스몰토크에서 프로그래밍은 다른 많은 언어에서의 프로그래밍과 다르기 때문에 너무도 자연스러운 일이다. 이러한 차이의 범위는 아래 도표에서 질적으로 표시된 특유의 스몰토크 '학습 곡선'을 야기한다. 곡선의 기울기와 길이는 물론 이전 경험, 그리고 자신의 기대치에 따라 많이 좌우된다. 스몰토크에서 프로그래밍 시 덜 불편하고 더 편안한 느낌을 가지기 위해선 2주~6개월 정도의 시간이 필요할 것이다. <br />
<br />
<br />
다행히도 불편함의 수준도 줄이고 학습 곡선의 정점까지 가는 데에 걸리는 시간을 줄이기 위한 적극적 조치가 몇 가지 있다. <br />
<br />
<br />
<br />
===문화적 충격 대비하기===<br />
<br />
스몰토크 학습 곡선, 올바른 방향으로 시작 시 납작해지고 길이가 줄어들 수 있다.<br />
<br />
[[image:ass_image_01_01.png|thumb|512px|Running Curve]]<br />
<br />
<br />
스몰토크는 다른 프로그래밍 언어와 다르다는 점은 아무리 강조해도 지나치지 않다. OOP를 처음으로 시도한다는 이유(물론 충분한 이유가 되겠지만!) 때문만은 아니다. 스몰토크와 다른 언어들 간에 실제 관리 및 기술적 차이가 존재한다. 예를 들어, 스몰토크는 다른 언어들에 비해 훨씬 더 상호작용적이고 탐구적인 프로그래밍 스타일을 촉진하고 안전하게 지원한다. 이것이 바로 스몰토크의 생산성이 그리도 유명한 이유이다. 그렇다고 스몰토크 프로그램을 디자인할 필요가 없다는 의미는 아니다. 오히려, 스몰토크를 최상으로 이용하길 원한다면 자신이 익숙한 것보다 더 상호작용적인 디자인과 프로그래밍 스타일을 채택해야 함을 의미한다. 이러한 점이 매우 불편하게 작용할 수 있는데, 특히 전형적인 '단일 경로(single-pass)' 또는 '폭포수(waterfall)' 방법론을 이용해 시스템을 개발하는 데에 익숙하다면 더 그러할 것이다. <br />
<br />
<br />
기술적인 측면에서 보면 완전한 프로그래밍 언어인 스몰토크는 다른 언어들보다 훨씬 더 많은 일을 할 수 있다. 그렇지만 완전히 다른 방식으로 일을 진행함을 자주 발견할 것이다. 예를 들어, PC, 매킨토시 또는 유닉스 워크스테이션에 그래픽 사용자 인터페이스(GUIs)를 이용해 애플리케이션을 작성하는 데에 익숙한 사람은 스몰토크 GUIs도 똑같은 일을 수행할 수 있음을 발견할 것이다. 하지만 완전히 다른 방식으로 빌드된다 (주로 역사적 이유로 스몰토크 사용자 인터페이스는 이벤트 위주보다는 '폴링(polling)'을 통해 작업한다).<br />
<br />
<br />
이러한 유형의 차이는 포기하는 듯한 느낌이 들게끔 만들기도 하는데, 어렵게 얻은 지식과 경험 대부분이 스몰토크 환경에서 잘 사용되지 않는 것처럼 보이기 때문이다. 꼭 그럴 필요가 없다는 사실을 보이는 것 또한 이번 책의 목표에 해당한다. <br />
<br />
<br />
<br />
===작은 것부터 시작하기===<br />
<br />
첫 번째 스몰토크 연습, 작은 것을 선택하라는 말은 명백하게 보이지만 다시 언급할 가치가 있겠다. 처음 빌드하는 데에 익숙한 시스템 크기는 아무리 일부(fraction)만 빌드를 시도한다 하더라도 사서 고생하는 격이다. 작은 것부터 시작하는 편이 위에서 언급한 문화적 충격을 줄이는 데에 큰 도움이 될 것이다. '미션 크리티컬' 애플리케이션으로 바로 착수하기보다는 실험적 프로젝트에 먼저 스몰토크를 시도해보는 편이 낫다는 사실은 두말할 필요도 없다. 물론 이 모든 것은 상황에 따라 달라진다. <br />
<br />
<br />
프로그래머들로 구성된 큰 팀을 스몰토크로 전향할 계획을 세운 관리자라면, 두 명이나 세 명으로 시작하는 편이 훨씬 낫다. 그들에게 새로운 기술을 탐구할 자유를 부여하고, 학습 곡선의 정점으로 스스로 이동하도록 놔두라. 나머지 팀 구성원들이 곡선의 정점으로 오르도록 도와줄 수 있는 지역 전문가들이 될 것이다. <br />
<br />
<br />
독자가 만일 그러한 팀의 구성원이거나 혹은 스몰토크를 개별적으로 학습하고 있다면, 스스로 행할 수 있는 일들이 많이 있다. 다른 사람의 스몰토크 프로그램에 접근할 수 있는 운을 가진 사람이라면 그들의 프로그램을 단순한 방식으로 수정해보라. 그런 기회가 없다면 자신의 프로그래밍 문제의 부분 집합을 작업해보라. 예를 들어, 주요 데이터 구조 중 일부를 스몰토크 객체로서 표현해본다든가, VisualWorks GUI 개발 툴을 이용해 주요 윈도우 대화상자(window dialogue)를 빌드해보도록 한다. <br />
<br />
<br />
<br />
===상호작용적으로 탐구하고 작업하라===<br />
<br />
스몰토크를 강력하게 만드는 것들 중 하나로 대화형 프로그래밍 환경을 들 수 있다. 이러한 환경을 더 많이 활용할수록 학습 곡선을 빠르게 따르고 문화적 충격은 줄어들 것이다. 순식간에 코드의 토막(snippet)을 생성하고 실행할 수 있음을 기억하라. 이는 특정 기능이 어떻게 작용하는지 이해할 수 없을 때 이상적이다. 그것을 이해하기 위해 매뉴얼을 훑어보는 데에 시간을 허비하지 말라. 애석하게도 스몰토크는 책을 통해 학습할 수 없다. 대신 실험을 하라! 숙련된 스몰토크 프로그래머들은 초보자들에 비해 매뉴얼을 사용하는 일이 적은데, 시스템에 대해 더 많이 알아서가 아니라 그들이 알아야 하는 것을 발견하기 위해 시스템을 사용하는 방법을 학습하였기 때문이다. 이와 관련된 기술은 제 2부-스몰토크의 예술-에서 상세히 다룰 것이다. <br />
<br />
<br />
무언가가 작동하는지 시험하기 위한 실험을 준비하는 데에는 약 30분 정도 소요되지만, 결정적 답을 얻으려면 그 정도 시간은 투자할 가치가 있다. 실험할 가치가 있는 것이다. 물론 여기서도 주의사항이 있다:<br />
<br />
<br />
<br />
===코드를 버릴 준비를 하라===<br />
<br />
스몰토크에서 많은 코드를 빌드하기란 매우 쉽다-어쨌든 매우 생산적인 환경이다. 하지만 실험이 끝났다면 자신의 코드를 버릴 준비를 해야 한다. 그렇다고 해서 엄격한 실험 단계를 거친 후에 처음부터 모든 것을 다시 작성해야 한다는 의미는 아니다. 이는 무언가를 프로그래밍하는 데에 있어 첫 시도보단 두 번째 시도가 훨씬 더 나을 것이라는 사실을 이용해야 한다는 의미다. 많은 언어에서는 이것이 사실일지 모르나, 대개 이러한 이점을 활용할 수 있는 여건이 안 된다. 스몰토크에서는 두 번째 시도가 훨씬 더 나을 뿐 아니라 짧은 시간 내에 생산하도록 해줄 것이다. 그만큼의 가치가 충분히 있다. <br />
<br />
<br />
<br />
===도움을 얻어라===<br />
<br />
너무나 당연하겠지만 스몰토크의 경험이 어느 정도 있는 사람에게 도움을 요청하여 얻으면 학습 곡선을 따르고 자립성을 키우기가 훨씬 수월해질 것이다. 간단한 질문에 대한 답을 찾는 데에 몇 시간씩 들이지 않아도 될 뿐만 아니라 훨씬 쉽게 훌륭한 스몰토크 '스타일'로 곧바로 프로그래밍을 시작할 수 있을 것이다. 다시 말하지만, 처음부터 훌륭한 스몰토크 스타일을 이용하도록 돕는 것은 이 책의 목표에 해당한다. <br />
<br />
<br />
자신의 조직에서 도와줄 사람을 찾을 수도 있고, 외부에서 알아볼 수도 있다. 하지만 스몰토크는 심지어 다른 대화형 객체 지향 언어들과도 다르다는 점을 명심하라. 스몰토크 경험이 있는 사람과 작업하도록 하라. <br />
<br />
<br />
도움을 받을 수 없거나 설사 도움을 받는다 하더라도 이 책을 읽으면 스스로 학습 곡선을 따르는 데에 많은 도움이 될 것이다. 하지만 스스로를 도울 작정이라면 스몰토크 시스템을 준비하여 실행시키는 것이 최선의 방법이다. 무언가를 탐구하고 시도하는 것은 객체 지향 프로그래밍과 스몰토크의 개념에 대한 자신의 이해를 시험하는 유일한 방법으로, 이제부터 이를 논하고자 한다. <br />
<br />
<br />
<br />
===여기부터 어디까지?===<br />
<br />
스몰토크의 예술과 과학은 VisualWorks 프로그래밍 환경(을 비롯해 다른 환경들)에 기반이 되는 개발 시스템, 코드-라이브러리, 스몰토크 언어를 학습하고 이해하도록 돕기 위한 것이다. 이 책은 두 가지 부분으로 나뉜다. <br />
<br />
<br />
제1부, 스몰토크의 과학은 스몰토크 자체를 소개한다. OOP의 기본 개념을 살펴보고 스몰토크 언어를 이야기한 후 시스템 라이브러리에서 가장 중요한 클래스를 몇 가지 다룬다. 개발 환경도 간략하게 살펴보겠지만, 이 분야에서 스스로 준비하여 실행하는 데에 어느 정도 책임질 준비가 되어 있어야 한다. 자신이 정말로 이해해야 하는 시스템의 부분들에 대해 스스로 알아낼 필요가 있는 기본 지식을 제공하는 것이다. 따라서 전체 시스템을 철저하게 다루지는 않는다. <br />
<br />
<br />
이 책의 두 번째 부분은 스몰토크의 예술로, 스몰토크로 작업 시 수반되는 좀 더 까다롭고 일반적인 문제를 다룬다. 스몰토크 프로그램을 어떻게 디자인하는지를 살펴볼 것이다 (가령 객체 지향 프로그래밍을 어떻게 하는지). 이에 더해, 스몰토크에서 어떻게 코딩하는지, 어떻게 개발 환경의 기능을 최상으로 활용하는지도 고려할 것이다 (디버깅에 관한 장도 포함). <br />
<br />
<br />
바로 다음 장은 객체 입문으로, 처음부터 시작하고자 한다. 이전에 OOP의 경험이 전혀 없다면 여기부터 시작한다. 하지만 OOP가 무엇인지 (그리고 OOP를 재교육할 필요가 없다고 생각되면) 확실히 알고 있으나 스몰토크의 경험이 없다면 제 3장-스몰토크의 입문-부터 시작한다. 모두에게 행운을 빌며, 한 가지만 기억하길 바란다. 스몰토크는 즐거워야 한다. <br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:TheArtandScienceofSmalltalk]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=TheArtandScienceofSmalltalk:Chapter_01&diff=5623TheArtandScienceofSmalltalk:Chapter 012022-06-14T04:52:32Z<p>Onionmixer: 이미지에 주석 추가</p>
<hr />
<div>;제 1 장 시작을 위한 충고<br />
<br />
==시작을 위한 충고==<br />
<br />
제 1장은 스몰토크를 처음으로 사용하는 사람을 위해 작성되었다. 이에 앞서 하나의 과제를 (스몰토크를 학습하는) 하나 맡게 될 것인데, 이러한 작업은 혜택이 많은 동시 매우 불편할 수 있다. 이 책의 목표들 중 하나는 혜택은 늘이고 불편함을 줄이는 데에 있다. 제 1장의 목표는 당신이 올바른 방향으로 시작하도록 돕는 것이다. 객체 지향 프로그래밍(OOP)과 스몰토크로 어떻게 전향하는지를 간략하게 살펴보고, 주의해야 할 사항을 몇 가지 언급한 후, 책의 나머지 부분에서 다루는 주제에 대한 배경 지식을 제공할 것이다. 스몰토크 프로젝트의 사실적 관리는 마지막 장에서 좀 더 자세히 살펴볼 것이다. <br />
<br />
<br />
스몰토크에 능숙해지기 위해 고려해야 할 것은 자신의 상황, 배경, 경험, 목표, 자원 등에 따라 다를 것이다. 그 결과 이번 장은 과거에 당신과 비슷한 위치에서 다른 사람들에게 효과가 나타난 것으로 알려진 의견과 생각을 제시한다. <br />
<br />
<br />
스몰토크를 이미 사용하고 있는 사람이라도 이번 장에서 흥미로운 점을 발견할 것이다. 사실, 현재 어떤 특정한 문제가 있다면 문제가 어디서 발생했는지 발견할 수 있을지도 모른다. 하지만 이번 장이 자신에게 맞지 않다고 생각한다면 다음 장으로 넘어가도 무방하다. <br />
<br />
<br />
<br />
===스몰토크 학습 곡선===<br />
<br />
새로운 언어를 힘들이지 않고 학습하는 경우는 없다. 애석하게도 스몰토크의 경우 다른 새로운 언어에 비해 학습해야 할 것이 조금 더 많다. 객체 지향 프로그래밍을 배우고, 스몰토크 언어 자체를 배우고, VisualWorks 개발 환경을 사용하는 방법을 학습하며, 자신만의 스몰토크 프로그램을 작성하는 방식과 시스템의 코드-라이브러리에 코드를 재사용하는 방법도 배워야 할지도 모른다. 가장 중요한 것은, 자신만의 프로그래밍 문제를 해결하기 위한 완전히 새로운 방법을 학습해야 할 수도 있다는 점이다. <br />
<br />
<br />
스몰토크를 처음 이용하는 많은 사람들은 시작은 매우 열정적일지 모르나 그들이 흡수해야 하는 변화가 증가하면서 생긴 불편함으로 인해 처음 갖고 있던 열정이 급속하게 떨어진다. 스몰토크에서 프로그래밍은 다른 많은 언어에서의 프로그래밍과 다르기 때문에 너무도 자연스러운 일이다. 이러한 차이의 범위는 아래 도표에서 질적으로 표시된 특유의 스몰토크 '학습 곡선'을 야기한다. 곡선의 기울기와 길이는 물론 이전 경험, 그리고 자신의 기대치에 따라 많이 좌우된다. 스몰토크에서 프로그래밍 시 덜 불편하고 더 편안한 느낌을 가지기 위해선 2주~6개월 정도의 시간이 필요할 것이다. <br />
<br />
<br />
다행히도 불편함의 수준도 줄이고 학습 곡선의 정점까지 가는 데에 걸리는 시간을 줄이기 위한 적극적 조치가 몇 가지 있다. <br />
<br />
<br />
<br />
===문화적 충격 대비하기===<br />
<br />
스몰토크 학습 곡선, 올바른 방향으로 시작 시 납작해지고 길이가 줄어들 수 있다.<br />
<br />
[[image:ass_image_01_01.png|512px|Running Curve]]<br />
<br />
<br />
스몰토크는 다른 프로그래밍 언어와 다르다는 점은 아무리 강조해도 지나치지 않다. OOP를 처음으로 시도한다는 이유(물론 충분한 이유가 되겠지만!) 때문만은 아니다. 스몰토크와 다른 언어들 간에 실제 관리 및 기술적 차이가 존재한다. 예를 들어, 스몰토크는 다른 언어들에 비해 훨씬 더 상호작용적이고 탐구적인 프로그래밍 스타일을 촉진하고 안전하게 지원한다. 이것이 바로 스몰토크의 생산성이 그리도 유명한 이유이다. 그렇다고 스몰토크 프로그램을 디자인할 필요가 없다는 의미는 아니다. 오히려, 스몰토크를 최상으로 이용하길 원한다면 자신이 익숙한 것보다 더 상호작용적인 디자인과 프로그래밍 스타일을 채택해야 함을 의미한다. 이러한 점이 매우 불편하게 작용할 수 있는데, 특히 전형적인 '단일 경로(single-pass)' 또는 '폭포수(waterfall)' 방법론을 이용해 시스템을 개발하는 데에 익숙하다면 더 그러할 것이다. <br />
<br />
<br />
기술적인 측면에서 보면 완전한 프로그래밍 언어인 스몰토크는 다른 언어들보다 훨씬 더 많은 일을 할 수 있다. 그렇지만 완전히 다른 방식으로 일을 진행함을 자주 발견할 것이다. 예를 들어, PC, 매킨토시 또는 유닉스 워크스테이션에 그래픽 사용자 인터페이스(GUIs)를 이용해 애플리케이션을 작성하는 데에 익숙한 사람은 스몰토크 GUIs도 똑같은 일을 수행할 수 있음을 발견할 것이다. 하지만 완전히 다른 방식으로 빌드된다 (주로 역사적 이유로 스몰토크 사용자 인터페이스는 이벤트 위주보다는 '폴링(polling)'을 통해 작업한다).<br />
<br />
<br />
이러한 유형의 차이는 포기하는 듯한 느낌이 들게끔 만들기도 하는데, 어렵게 얻은 지식과 경험 대부분이 스몰토크 환경에서 잘 사용되지 않는 것처럼 보이기 때문이다. 꼭 그럴 필요가 없다는 사실을 보이는 것 또한 이번 책의 목표에 해당한다. <br />
<br />
<br />
<br />
===작은 것부터 시작하기===<br />
<br />
첫 번째 스몰토크 연습, 작은 것을 선택하라는 말은 명백하게 보이지만 다시 언급할 가치가 있겠다. 처음 빌드하는 데에 익숙한 시스템 크기는 아무리 일부(fraction)만 빌드를 시도한다 하더라도 사서 고생하는 격이다. 작은 것부터 시작하는 편이 위에서 언급한 문화적 충격을 줄이는 데에 큰 도움이 될 것이다. '미션 크리티컬' 애플리케이션으로 바로 착수하기보다는 실험적 프로젝트에 먼저 스몰토크를 시도해보는 편이 낫다는 사실은 두말할 필요도 없다. 물론 이 모든 것은 상황에 따라 달라진다. <br />
<br />
<br />
프로그래머들로 구성된 큰 팀을 스몰토크로 전향할 계획을 세운 관리자라면, 두 명이나 세 명으로 시작하는 편이 훨씬 낫다. 그들에게 새로운 기술을 탐구할 자유를 부여하고, 학습 곡선의 정점으로 스스로 이동하도록 놔두라. 나머지 팀 구성원들이 곡선의 정점으로 오르도록 도와줄 수 있는 지역 전문가들이 될 것이다. <br />
<br />
<br />
독자가 만일 그러한 팀의 구성원이거나 혹은 스몰토크를 개별적으로 학습하고 있다면, 스스로 행할 수 있는 일들이 많이 있다. 다른 사람의 스몰토크 프로그램에 접근할 수 있는 운을 가진 사람이라면 그들의 프로그램을 단순한 방식으로 수정해보라. 그런 기회가 없다면 자신의 프로그래밍 문제의 부분 집합을 작업해보라. 예를 들어, 주요 데이터 구조 중 일부를 스몰토크 객체로서 표현해본다든가, VisualWorks GUI 개발 툴을 이용해 주요 윈도우 대화상자(window dialogue)를 빌드해보도록 한다. <br />
<br />
<br />
<br />
===상호작용적으로 탐구하고 작업하라===<br />
<br />
스몰토크를 강력하게 만드는 것들 중 하나로 대화형 프로그래밍 환경을 들 수 있다. 이러한 환경을 더 많이 활용할수록 학습 곡선을 빠르게 따르고 문화적 충격은 줄어들 것이다. 순식간에 코드의 토막(snippet)을 생성하고 실행할 수 있음을 기억하라. 이는 특정 기능이 어떻게 작용하는지 이해할 수 없을 때 이상적이다. 그것을 이해하기 위해 매뉴얼을 훑어보는 데에 시간을 허비하지 말라. 애석하게도 스몰토크는 책을 통해 학습할 수 없다. 대신 실험을 하라! 숙련된 스몰토크 프로그래머들은 초보자들에 비해 매뉴얼을 사용하는 일이 적은데, 시스템에 대해 더 많이 알아서가 아니라 그들이 알아야 하는 것을 발견하기 위해 시스템을 사용하는 방법을 학습하였기 때문이다. 이와 관련된 기술은 제 2부-스몰토크의 예술-에서 상세히 다룰 것이다. <br />
<br />
<br />
무언가가 작동하는지 시험하기 위한 실험을 준비하는 데에는 약 30분 정도 소요되지만, 결정적 답을 얻으려면 그 정도 시간은 투자할 가치가 있다. 실험할 가치가 있는 것이다. 물론 여기서도 주의사항이 있다:<br />
<br />
<br />
<br />
===코드를 버릴 준비를 하라===<br />
<br />
스몰토크에서 많은 코드를 빌드하기란 매우 쉽다-어쨌든 매우 생산적인 환경이다. 하지만 실험이 끝났다면 자신의 코드를 버릴 준비를 해야 한다. 그렇다고 해서 엄격한 실험 단계를 거친 후에 처음부터 모든 것을 다시 작성해야 한다는 의미는 아니다. 이는 무언가를 프로그래밍하는 데에 있어 첫 시도보단 두 번째 시도가 훨씬 더 나을 것이라는 사실을 이용해야 한다는 의미다. 많은 언어에서는 이것이 사실일지 모르나, 대개 이러한 이점을 활용할 수 있는 여건이 안 된다. 스몰토크에서는 두 번째 시도가 훨씬 더 나을 뿐 아니라 짧은 시간 내에 생산하도록 해줄 것이다. 그만큼의 가치가 충분히 있다. <br />
<br />
<br />
<br />
===도움을 얻어라===<br />
<br />
너무나 당연하겠지만 스몰토크의 경험이 어느 정도 있는 사람에게 도움을 요청하여 얻으면 학습 곡선을 따르고 자립성을 키우기가 훨씬 수월해질 것이다. 간단한 질문에 대한 답을 찾는 데에 몇 시간씩 들이지 않아도 될 뿐만 아니라 훨씬 쉽게 훌륭한 스몰토크 '스타일'로 곧바로 프로그래밍을 시작할 수 있을 것이다. 다시 말하지만, 처음부터 훌륭한 스몰토크 스타일을 이용하도록 돕는 것은 이 책의 목표에 해당한다. <br />
<br />
<br />
자신의 조직에서 도와줄 사람을 찾을 수도 있고, 외부에서 알아볼 수도 있다. 하지만 스몰토크는 심지어 다른 대화형 객체 지향 언어들과도 다르다는 점을 명심하라. 스몰토크 경험이 있는 사람과 작업하도록 하라. <br />
<br />
<br />
도움을 받을 수 없거나 설사 도움을 받는다 하더라도 이 책을 읽으면 스스로 학습 곡선을 따르는 데에 많은 도움이 될 것이다. 하지만 스스로를 도울 작정이라면 스몰토크 시스템을 준비하여 실행시키는 것이 최선의 방법이다. 무언가를 탐구하고 시도하는 것은 객체 지향 프로그래밍과 스몰토크의 개념에 대한 자신의 이해를 시험하는 유일한 방법으로, 이제부터 이를 논하고자 한다. <br />
<br />
<br />
<br />
===여기에서 어디로 갈 것인가?===<br />
<br />
스몰토크의 예술과 과학은 VisualWorks 프로그래밍 환경(을 비롯해 다른 환경들)에 기반이 되는 개발 시스템, 코드-라이브러리, 스몰토크 언어를 학습하고 이해하도록 돕기 위한 것이다. 이 책은 두 가지 부분으로 나뉜다. <br />
<br />
<br />
제1부, 스몰토크의 과학은 스몰토크 자체를 소개한다. OOP의 기본 개념을 살펴보고 스몰토크 언어를 이야기한 후 시스템 라이브러리에서 가장 중요한 클래스를 몇 가지 다룬다. 개발 환경도 간략하게 살펴보겠지만, 이 부분만큼은 스스로 익혀서 사용할 준비가 어느 정도 되어 있어야 한다. 이 책은 정말로 이해해야 하는 시스템의 부분들을 스스로 알아내는 데에 필요한 기본 지식을 제공할 뿐, 전체 시스템을 철저하게 다루지는 않는다. <br />
<br />
<br />
이 책의 두 번째 부분은 스몰토크의 예술로, 스몰토크로 작업 시 수반되는 좀 더 까다롭고 일반적인 문제를 다룬다. 스몰토크 프로그램을 어떻게 디자인하는지를 살펴볼 것이다 (가령 객체 지향 프로그래밍을 어떻게 하는지). 이에 더해, 스몰토크에서 어떻게 코딩하는지, 어떻게 개발 환경의 기능을 최상으로 활용하는지도 고려할 것이다 (디버깅에 관한 장도 포함). <br />
<br />
<br />
바로 다음 장은 객체 입문으로, 처음부터 시작하고자 한다. 이전에 OOP 경험이 전혀 없다면 여기부터 시작한다. 하지만 OOP가 무엇인지 확실히 알고 있고 (재교육이 필요 없다고 생각되며) 스몰토크 경험만 없다면 제 3장-스몰토크의 입문-부터 시작해도 좋다. 모두에게 행운을 빌며, 한 가지만 기억하길 바란다. 스몰토크는 즐거워야 한다. <br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:TheArtandScienceofSmalltalk]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=WikiReport&diff=5619WikiReport2021-12-17T01:11:31Z<p>Onionmixer: 번역 위키 작업 진행현황 페이지 내용 추가</p>
<hr />
<div>==smalltalk 관련==<br />
<br />
* Computer Programming with GNU Smalltalk - 135 페이지<br />
* Design Pattern Smalltalk Companion - 374 페이지<br />
* Squeak by Example - 380 페이지<br />
* Deep Into Pharo - 384 페이지<br />
* Smalltalk Best Practice Patterns - 157 페이지<br />
* Smalltalk Objects and Design - 279 페이지<br />
* The Art and Science of Smalltalk - 179 페이지<br />
* Smalltalk-80 Language Implementation - 대략 700 페이지<br />
* gnu Smalltalk User’s Guide - 대략 300 페이지<br />
* Cincom Smalltalk Online Document - 대략 50 페이지<br />
<br />
<br />
==pascal 관련==<br />
<br />
* Start programming using ObjectPascal - 143 페이지<br />
* Lazarus Complete Guide - 659 페이지<br />
<br />
<br />
==gnome 관련==<br />
<br />
* Foundations of GTK Development - 655 페이지<br />
* GNOME3 Application Development Beginners Guide - 366 페이지<br />
<br />
<br />
==php 관련==<br />
<br />
* phpunit manual - 209 페이지<br />
<br />
<br />
==기타문서들==<br />
<br />
* Extending the Squeak Virtual Machine<br />
* Cincom Smalltalk Online Document<br />
* Introduction to Design Patterns in Delphi<br />
* More Design Patterns<br />
<br />
기타문서를 제외하면 4970 페이지</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=Main_Page&diff=5618Main Page2021-12-17T01:10:28Z<p>Onionmixer: 번역 위키 작업 진행현황 링크 추가</p>
<hr />
<div>==흡혈양파의 번역工房 입니다!==<br />
<br />
이 wiki는 저와 함께 이런저런 문서작업을 하는 분들과의 작업 결과 공유 및 검수 등을 위해 만들어져 있습니다.<br />
<br />
<br />
;기타<br />
<br />
* [[Book|생성된 ebook 목록]]<br />
* [http://trans.onionmixer.net/mediawiki/index.php?title=%EB%B6%84%EB%A5%98:%EC%B1%85 생성된 ebook목록-자동]<br />
* [http://trans.onionmixer.net/mediawiki/index.php?title=%ED%8A%B9%EC%88%98%EA%B8%B0%EB%8A%A5:%EC%A0%91%EB%91%90%EC%96%B4%EC%B0%BE%EA%B8%B0&prefix=%ED%9D%A1%ED%98%88%EC%96%91%ED%8C%8C%EC%9D%98+%EC%9D%B8%ED%84%B0%EB%84%B7%E5%B7%A5%E6%88%BF:%EC%B1%85/ 또다른 생성된 ebook목록-자동]<br />
<br />
<br />
<br />
;smalltalk 번역<br />
<br />
* [[Smalltalk_Translation_Dictionary|Smalltalk 번역 용어사전]]<br />
<br />
<br />
* [[SqueakByExmaple|Squeak By Example]]<br />
* [[ExtendingtheSqueakVirtualMachine|Extending the Squeak Virtual Machine]]<br />
* [[DeepintoPharo|Deep into Pharo(ESUG 2013 Edition)]]<br />
<br />
<br />
* [[TheArtandScienceofSmalltalk|The Art and Science of Smalltalk]]<br />
<br />
<br />
* [[SmalltalkObjectsandDesign|Smalltalk Objects and Design]]<br />
* [[SmalltalkBestPracticePatterns|Smalltalk Best Practice Patterns]]<br />
* [[DesignPatternSmalltalkCompanion|The Design Patterns Smalltalk Companion]]<br />
<br />
<br />
* [[ComputerProgrammingwithGNUSmalltalk|Computer Programming with GNU Smalltalk]]<br />
* [[gnuSmalltalkUsersGuide|gnu Smalltalk User’s Guide]]<br />
<br />
<br />
* [[CincomSmalltalk|Cincom Smalltalk Online Document]]<br />
<br />
<br />
* [[Smalltalk80LanguageImplementation|SMALLTALK-80/The Language and Its Implementation English]]<br />
* [[Smalltalk80LanguageImplementationKor|SMALLTALK-80/The Language and Its Implementation Korean]]<br />
<br />
<br />
* [[TheSpecUIframework|The Spec UI framework]]<br />
<br />
<br />
<br />
<br />
;Delphi, FreePascal, Lazarus번역<br />
<br />
* [[StartprogrammingusingObjectPascal|Start programming using ObjectPascal]]<br />
* [[LazarusCompleteGuide|Lazarus Complete Guide]]<br />
<br />
<br />
* [[DesignPatternDelphi|Introduction to Design Patterns in Delphi]]<br />
* [[MoreDesignPatterns|More Design Patterns]]<br />
<br />
<br />
* [[LazarusLicense|Lazarus license관련 번역들]]<br />
<br />
<br />
<br />
;gtk+2.x, gnome 3<br />
<br />
* [[FoundationsofGTKDevelopment|Foundations of GTK Development]]<br />
* [[GNOME3_Application_Development_Beginners_Guide|GNOME3 Application Development Beginners Guide]]<br />
<br />
<br />
<br />
;이론서적 번역<br />
<br />
* [[PHPUnit_Manual|PHPUnit 매뉴얼]]<br />
<br />
<br />
;기타 번역<br />
<br />
* [[GNUEMACS_Manual|GNU Emacs 매뉴얼(24.5)]]<br />
* [[NeXTSTEP_DriverKit|NeXTSTEP Driver Kit]]<br />
* [[RaSCSI_Document|RaSCSI Document (1.32)]]<br />
* [[redmine_git_hosting_get_started|redmine git hosting / get started 문서번역]]<br />
<br />
<br />
<br />
;운영테스트<br />
<br />
* [[WikiTestPage|위키문법테스트페이지]]<br />
* [[WikiTips|위키운영팁페이지]]<br />
* [[WikiReport|번역 위키 작업 진행현황]]<br />
<br />
<br />
<br />
Consult the [https://www.mediawiki.org/wiki/Special:MyLanguage/Help:Contents User's Guide] for information on using the wiki software.<br />
<br />
== Getting started ==<br />
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]<br />
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]<br />
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]<br />
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Localisation#Translation_resources Localise MediaWiki for your language]<br />
* [https://www.mediawiki.org/wiki/Special:MyLanguage/Manual:Combating_spam Learn how to combat spam on your wiki]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=TheSpecUIframework&diff=5617TheSpecUIframework2021-10-15T08:18:35Z<p>Onionmixer: The Spec Handbook 링크 추가</p>
<hr />
<div>;The Spec UI framework<br />
<br />
원본-영어<br><br />
https://books.pharo.org/spec-tutorial/<br />
<br />
<br />
번역진행<br><br />
'''Google Translation Service'''<br />
<br />
<br />
검수진행<br><br />
'''없음'''<br />
<br />
참고문서<br><br />
<br />
* '''https://rakshit-p.medium.com/building-a-simple-application-with-spec2-36fa4b0ffb38''' - 20211015<br />
* https://github.com/pharo-spec/Spec/blob/Pharo10/spec2.md - The Spec Handbook<br />
<br />
----<br />
===The Spec UI framework===<br />
<br />
'''번역관련 내용'''<br />
<br />
* [[:TheSpecUIframework:transdic|번역관련 기타내용]]<br />
<br />
<br />
===Book===<br />
<br />
* [[:TheSpecUIframework:Contents|목차]]<br />
<br />
<br />
* [[:TheSpecUIframework:Chapter_01|Chapter 01 서문]]<br />
* [[:TheSpecUIframework:Chapter_02|Chapter 02 처음 사용 및 예제]]<br />
* [[:TheSpecUIframework:Chapter_03|Chapter 03 요소(elements)의 재사용 및 구성]]<br />
* [[:TheSpecUIframework:Chapter_04|Chapter 04 Spec 의 기본 사항]]<br />
* [[:TheSpecUIframework:Chapter_05|Chapter 05 레이아웃 구성]]<br />
* [[:TheSpecUIframework:Chapter_06|Chapter 06 창 관리하기]]<br />
* [[:TheSpecUIframework:Chapter_07|Chapter 07 고급 위젯]]<br />
* [[:TheSpecUIframework:Chapter_08|Chapter 08 동적 Spec]]<br />
* [[:TheSpecUIframework:Chapter_09|Chapter 09 Tips 과 Tricks]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=TheSpecUIframework&diff=5616TheSpecUIframework2021-10-15T08:16:31Z<p>Onionmixer: 참고문서 link 추가</p>
<hr />
<div>;The Spec UI framework<br />
<br />
원본-영어<br><br />
https://books.pharo.org/spec-tutorial/<br />
<br />
<br />
번역진행<br><br />
'''Google Translation Service'''<br />
<br />
<br />
검수진행<br><br />
'''없음'''<br />
<br />
참고문서<br><br />
'''https://rakshit-p.medium.com/building-a-simple-application-with-spec2-36fa4b0ffb38''' - 20211015<br />
<br />
----<br />
===The Spec UI framework===<br />
<br />
'''번역관련 내용'''<br />
<br />
* [[:TheSpecUIframework:transdic|번역관련 기타내용]]<br />
<br />
<br />
===Book===<br />
<br />
* [[:TheSpecUIframework:Contents|목차]]<br />
<br />
<br />
* [[:TheSpecUIframework:Chapter_01|Chapter 01 서문]]<br />
* [[:TheSpecUIframework:Chapter_02|Chapter 02 처음 사용 및 예제]]<br />
* [[:TheSpecUIframework:Chapter_03|Chapter 03 요소(elements)의 재사용 및 구성]]<br />
* [[:TheSpecUIframework:Chapter_04|Chapter 04 Spec 의 기본 사항]]<br />
* [[:TheSpecUIframework:Chapter_05|Chapter 05 레이아웃 구성]]<br />
* [[:TheSpecUIframework:Chapter_06|Chapter 06 창 관리하기]]<br />
* [[:TheSpecUIframework:Chapter_07|Chapter 07 고급 위젯]]<br />
* [[:TheSpecUIframework:Chapter_08|Chapter 08 동적 Spec]]<br />
* [[:TheSpecUIframework:Chapter_09|Chapter 09 Tips 과 Tricks]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=TheSpecUIframework&diff=5615TheSpecUIframework2021-10-15T08:15:41Z<p>Onionmixer: pdf 문서 link 수정</p>
<hr />
<div>;The Spec UI framework<br />
<br />
원본-영어<br><br />
https://books.pharo.org/spec-tutorial/<br />
<br />
<br />
번역진행<br><br />
'''Google Translation Service'''<br />
<br />
<br />
검수진행<br><br />
'''없음'''<br />
<br />
----<br />
===The Spec UI framework===<br />
<br />
'''번역관련 내용'''<br />
<br />
* [[:TheSpecUIframework:transdic|번역관련 기타내용]]<br />
<br />
<br />
===Book===<br />
<br />
* [[:TheSpecUIframework:Contents|목차]]<br />
<br />
<br />
* [[:TheSpecUIframework:Chapter_01|Chapter 01 서문]]<br />
* [[:TheSpecUIframework:Chapter_02|Chapter 02 처음 사용 및 예제]]<br />
* [[:TheSpecUIframework:Chapter_03|Chapter 03 요소(elements)의 재사용 및 구성]]<br />
* [[:TheSpecUIframework:Chapter_04|Chapter 04 Spec 의 기본 사항]]<br />
* [[:TheSpecUIframework:Chapter_05|Chapter 05 레이아웃 구성]]<br />
* [[:TheSpecUIframework:Chapter_06|Chapter 06 창 관리하기]]<br />
* [[:TheSpecUIframework:Chapter_07|Chapter 07 고급 위젯]]<br />
* [[:TheSpecUIframework:Chapter_08|Chapter 08 동적 Spec]]<br />
* [[:TheSpecUIframework:Chapter_09|Chapter 09 Tips 과 Tricks]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=MastringCmakeVersion31:Chapter_02&diff=5614MastringCmakeVersion31:Chapter 022020-09-21T12:10:49Z<p>Onionmixer: 형식 오류 수정</p>
<hr />
<div>==CHAPTER TWO : GETTING STARTED==<br />
<br />
===Getting and Installing CMake on Your Computer===<br />
<br />
Before using CMake, you will need to install or build the CMake binaries on your system. On many systems, you may find that CMake is already installed or is available for install with the standard package manager tool for the system. Cygwin, Debian, FreeBSD, OS X MacPorts, Mac OS X Fink, and many others all have CMake distributions. If your system does not have a CMake package, you can find CMake precompiled for many common architectures at www.cmake.org. If you do not find precompiled binaries for your system, then you can build CMake from source. To build CMake, you will need a modern C++ compiler.<br />
<br />
====UNIX and Mac Binary Installations====<br />
<br />
If your system provides CMake as one of its standard packages, follow your system's package installation instructions. If your system does not have CMake, or has an out-of-date version of CMake, you can download precompiled binaries from www.cmake.org. The binaries from www.cmake.org come in the form of a compressed .tar file. To install, simply extract the compressed .tar file into a destination directory such as /usr/local. Any directory is allowed, so CMake does not require root privileges for installation.<br />
<br />
====Windows Binary Installation====<br />
<br />
For Windows, CMake provides an installer executable available for download from www.cmake.org. To install this file, simply run the executable on the Windows machine where you want to install CMake. You will be able to run CMake from the Start Menu or from the command line after it is installed.<br />
<br />
===Building CMake Yourself===<br />
<br />
If binaries are not available for your system, or if binaries are not available for the version of CMake you wish to use, you can build CMake from the source code. You can obtain the CMake source code from the www.cmake.org download page. Once you have the source code, it can be built in two different ways. If you have a version of CMake on your system, you can use it to build other versions of CMake. The current development version of CMake can generally be built from the previous release of CMake. This is how new versions of CMake are built on most Windows systems.<br />
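<br />
As a rough sketch, building CMake with an already-installed CMake follows the usual out-of-source pattern (the directory name CMake-build below is only illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir CMake-build<br />
cd CMake-build<br />
cmake ../CMake<br />
make<br />
</syntaxhighlight><br />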
<br />
The second way to build CMake is by running its bootstrap build script. To do this, change directory into your CMake source directory and type:<br />
<br />
<syntaxhighlight lang="text"><br />
./bootstrap<br />
make<br />
make install<br />
</syntaxhighlight><br />
<br />
The make install step is optional since CMake can run directly from the build directory if desired. On UNIX, if you are not using the system's C++ compiler, you need to tell the bootstrap script which compiler you want to use. This is done by setting the environment variable CXX before running bootstrap. If you need to use any special flags with your compiler, set the CXXFLAGS environment variable. For example, on the SGI with the 7.3X compiler, you would build CMake like this:<br />
<br />
<syntaxhighlight lang="text"><br />
cd CMake<br />
(setenv CXX CC; setenv CXXFLAGS "-LANG:std"; ./bootstrap)<br />
make<br />
make install<br />
</syntaxhighlight><br />
<br />
===Basic CMake Usage and Syntax===<br />
<br />
Using CMake is simple. The build process is controlled by creating one-or-more CMakeLists files (actually CMakeLists.txt but this guide will leave off the extension in most cases) in each of the directories that make up a project. The CMakeLists files contain the project description in CMake's simple language. The language is expressed as a series of comments and commands. Comments start with # and run to the end of the line. Commands have the form<br />
<br />
<syntaxhighlight lang="text"><br />
command (args...)<br />
</syntaxhighlight><br />
<br />
where command is the name of the command, and args is a whitespace-separated list of arguments. Each command is evaluated in the order that it appears in the CMakeLists file. CMake is no longer case sensitive to command names as of version 2.2, so where you see command, you could use COMMAND or Command instead. Older versions of CMake only accepted uppercase commands.<br />
<br />
<syntaxhighlight lang="text"><br />
command ("") # 1 quoted argument<br />
command ("a b c") # 1 quoted argument<br />
command ("a;b;c") # 1 quoted argument<br />
command ("a" "b" "c") # 3 quoted arguments<br />
command (a b c) # 3 unquoted arguments<br />
command (a;b;c) # 1 unquoted argument expands to 3<br />
</syntaxhighlight><br />
<br />
CMake supports simple variables storing strings. Use the set() (page 330) command to set variable values. In its simplest form, the first argument to set is the name of the variable and the rest of the arguments are the values. Multiple value arguments are packed into a semicolon-separated list and stored in the variable as a string. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
set (Foo "") # 1 quoted arg -> value is ""<br />
set (Foo a) # 1 unquoted arg -> value is "a"<br />
set (Foo "a b c") # 1 quoted arg -> value is "a b c"<br />
set (Foo a b c) # 3 unquoted args -> value is "a;b;c"<br />
</syntaxhighlight><br />
<br />
Variables may be referenced in command arguments using syntax ${VAR} where VAR is the variable name. If the named variable is not defined, the reference is replaced with an empty string; otherwise it is replaced by the value of the variable. Replacement is performed prior to the expansion of unquoted arguments, so variable values containing semicolons are split into zero-or-more arguments in place of the original unquoted argument. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
set (Foo a b c) # 3 unquoted args -> value is "a;b;c"<br />
command(${Foo}) # unquoted arg replaced by a;b;c<br />
# and expands to three arguments<br />
command("${Foo}") # quoted arg value is "a;b;c"<br />
set (Foo "") # 1 quoted arg -> value is empty string<br />
command(${Foo}) # unquoted arg replaced by empty string<br />
# and expands to zero arguments<br />
command("${Foo}") # quoted arg value is empty string<br />
</syntaxhighlight><br />
<br />
System environment variables and Windows registry values can be accessed directly in CMake. To access system environment variables, use the syntax $ENV{VAR}. CMake can also reference registry entries in many commands using a syntax of the form [HKEY_CURRENT_USER\\Software\\path1\\path2;key], where the paths are built from the registry tree and key.<br />
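<br />
As a small sketch, both mechanisms can be combined in a CMakeLists file; the environment variable and registry key named here are hypothetical, not part of any real project:<br />
<br />
<syntaxhighlight lang="text"><br />
# read an environment variable; assumes MY_TOOL_ROOT is set before CMake runs<br />
set (TOOL_ROOT $ENV{MY_TOOL_ROOT})<br />
<br />
# use a Windows registry value as a search path (hypothetical vendor key)<br />
find_library (TOOL_LIBRARY<br />
  NAMES tool<br />
  PATHS [HKEY_CURRENT_USER\\Software\\MyVendor\\MyTool;InstallDir]<br />
)<br />
</syntaxhighlight><br />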
<br />
<br />
===Hello World for CMake===<br />
<br />
For starters, let us consider the simplest possible CMakeLists file. To compile an executable from one source file, the CMakeLists file would contain two lines:<br />
<br />
<syntaxhighlight lang="text"><br />
project (Hello)<br />
add_executable (Hello Hello.c)<br />
</syntaxhighlight><br />
<br />
To build the Hello executable, follow the process described in Running CMake (See section 0) to generate the build files. The project() (page 327) command indicates what the name of the resulting workspace should be and the add_executable() (page 273) command adds an executable target to the build process. That's all there is to it for this simple example. If your project requires a few more files, it is quite easy to modify the add_executable line as shown below.<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (Hello Hello.c File2.c File3.c File4.c)<br />
</syntaxhighlight><br />
<br />
add_executable is just one of many commands available in CMake. Consider the more complicated example below.<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6)<br />
project (HELLO)<br />
<br />
set (HELLO_SRCS Hello.c File2.c File3.c)<br />
<br />
if (WIN32)<br />
set (HELLO_SRCS ${HELLO_SRCS} WinSupport.c)<br />
else ()<br />
set (HELLO_SRCS ${HELLO_SRCS} UnixSupport.c)<br />
endif ()<br />
<br />
add_executable (Hello ${HELLO_SRCS})<br />
<br />
# look for the Tcl library<br />
find_library (TCL_LIBRARY<br />
NAMES tcl tcl84 tcl83 tcl82 tcl80<br />
PATHS /opt/TclTk/lib c:/TclTk/lib<br />
)<br />
<br />
if (TCL_LIBRARY)<br />
target_link_libraries (Hello ${TCL_LIBRARY})<br />
endif ()<br />
</syntaxhighlight><br />
<br />
In this example, the set() (page 330) command is used to group together source files into a list. The if() (page 313) command is used to add either WinSupport.c or UnixSupport.c to this list based on whether or not CMake is running on Windows. Finally, the add_executable() (page 273) command is used to build the executable with the files listed in the variable HELLO_SRCS. The find_library() (page 294) command looks for the Tcl library under a few different names and in a few different paths. An if command checks if the TCL_LIBRARY was found, and if so, adds it to the link line for the Hello executable target.<br />
<br />
<br />
===How to Run CMake?===<br />
<br />
Once CMake has been installed on your system, using it to build a project is easy. There are two main directories CMake uses when building a project: the source directory and the binary directory. The source directory is where the source code for your project is located. This is also where the CMakeLists files will be found. The binary directory is where you want CMake to put the resulting object files, libraries, and executables. CMake will not write any files to the source directory, only to the binary directory. We encourage use of "out-of-source" builds in which the source and binary directories are different, but one may also perform "in-source" builds in which the source and binary directories are the same.<br />
<br />
CMake supports both in-source and out-of-source builds on all operating systems. This means that you can configure your build to be completely outside of the source code tree, which makes it very easy to remove all of the files generated by a build. Having the build tree differ from the source tree also makes it easy to support having multiple builds of a single source tree. This is useful when you want to have multiple builds with different options but just one copy of the source code. Now let us consider the specifics of running CMake using its Qt-based GUI and command line interfaces.<br />
<br />
====Running CMake's Qt Interface====<br />
<br />
CMake includes a Qt-based user interface that can be used on most platforms, including UNIX, Mac OS X, and Windows. This interface is included in the CMake source code, but you will need an installation of Qt on your system in order to build it.<br />
<br />
<<Figure 2.1: Qt-based CMake GUI>><br />
<br />
On Windows, the executable is named cmake-gui.exe and it should be in your Start menu under Program Files. There may also be a shortcut on your desktop, or if you built CMake from the source, it will be in the build directory. For UNIX and Mac users, the executable is named cmake-gui and it can be found where you installed the CMake executables. A GUI will appear similar to what is shown in Figure 2.1. The top two fields are the source code and binary directories. They allow you to specify where the source code is located for what you want to compile, and where the resulting binaries should be placed. You should set these two values first. If the binary directory you specify does not exist, it will be created for you. If the binary directory has been configured by CMake before, it will then automatically set the source tree.<br />
<br />
The middle area is where you can specify different options for the build process. More obscure variables may be hidden, but can be seen if you select "Advanced View" from the view pulldown. You can search for values in the middle area by typing all or part of the name into the search box. This can be handy for finding specific settings or options in a large project. The bottom area of the window includes the Configure and Generate buttons as well as a progress bar and scrollable output window.<br />
<br />
Once you have specified the source code and binary directories, click the Configure button. This will cause CMake to read in the CMakeLists files from the source code directory and update the cache area to display any new options for the project. If you are running cmake-gui for the first time on this binary directory it will prompt you to determine which generator you wish to use, as shown in Figure 2.2. This dialog also presents options for customizing and tweaking the compilers you wish to use for the build.<br />
<br />
After the first configure, you can adjust the cache settings if desired and click the Configure button again. New values that were created by the configure process will be colored red. To be sure you have seen all possible values, click Configure until none of the values are red and you are happy with all the settings. Once you are done configuring, click the Generate button to produce the appropriate files.<br />
<br />
It is important that you make sure that your environment is suitable for running cmake-gui. If you are using an IDE such as Visual Studio, your environment will be setup correctly. If you are using NMake or MinGW, make sure that the compiler can run from your environment. You can either directly set the required environment variables for your compiler or use a shell in which they are already set. For example, Microsoft Visual Studio has an option on the start menu for creating a Visual Studio Command Prompt. This opens up a command prompt window that has its environment already setup for Visual Studio. You should run cmake-gui from this command prompt if you want to use NMake Makefiles. The same approach applies to MinGW; you should run cmake-gui from a MinGW shell that has a working compiler in its path.<br />
<br />
When cmake-gui finishes, it will have generated the build files in the binary directory you specified. If Visual Studio was selected as the generator, a MSVC workspace (or solution) file is created. This file's name is based on the name of the project you specified in the project() (page 327) command at the beginning of your CMakeLists file. For many other generator types, Makefiles are generated. The next step in this process is to open the workspace with MSVC. Once open, the project can be built in the normal manner of Microsoft Visual C++. The ALL_BUILD target can be used to build all of the libraries and executables in the package. If you are using a Makefile build type, then you would build by running make or nmake on the resulting Makefiles.<br />
<br />
<<Figure 2.2: Selecting a Generator>><br />
<br />
====Running the ccmake Curses Interface====<br />
<br />
On most UNIX platforms, if the curses library is supported, CMake provides an executable called ccmake. This interface is a terminal-based text application that is very similar to the Qt-based GUI. To run ccmake, change directory (cd) to the directory where you want the binaries to be placed. This can be the same directory as the source code for what we call in-source builds, or it can be a new directory you create. Then run ccmake with the path to the source directory on the command line. For in-source builds, use "." for the source directory. This will start the text interface as shown in Figure 2.3 (in this case, the cache variables are from VTK and most are set automatically).<br />
<br />
<<Figure 2.3: ccmake running on UNIX>><br />
<br />
Brief instructions are displayed in the bottom of the window. If you hit the "c" key, it will configure the project. You should always configure after changing values in the cache. To change values, use the arrow keys to select cache entries, and hit the enter key to edit them. Boolean values will toggle with the enter key. Once you have set all the values as you like, you can hit the "g" key to generate the Makefiles and exit. You can also hit "h" for help, "q" to quit, and "t" to toggle the viewing of advanced cache entries. Two examples of CMake usage on the UNIX platform follow for a hello world project called Hello. In the first example, an in-source build is performed.<br />
<br />
<syntaxhighlight lang="text"><br />
cd Hello<br />
ccmake .<br />
make<br />
</syntaxhighlight><br />
<br />
In the second example, an out-of-source build is performed.<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir Hello-Linux<br />
cd Hello-Linux<br />
ccmake ../Hello<br />
make<br />
</syntaxhighlight><br />
<br />
<br />
====Running CMake from the Command Line====<br />
<br />
From the command line, CMake can be run as an interactive question-and-answer session or as a non-interactive program. To run in interactive mode, just pass the "-i" option to CMake. This will cause CMake to ask you for a value for each entry in the cache file for the project. CMake will provide reasonable defaults, just like it does in the GUI and curses-based interfaces. The process stops when there are no longer any more questions to ask. An example of using the interactive mode of CMake is provided below.<br />
<br />
<syntaxhighlight lang="text"><br />
$ cmake -i -G "NMake Makefiles" ../CMake<br />
Would you like to see advanced options? [No]:<br />
Please wait while cmake processes CMakeLists.txt files....<br />
<br />
Variable Name: BUILD_TESTING<br />
Description: Build the testing tree.<br />
Current Value: ON<br />
New Value (Enter to keep current value):<br />
<br />
Variable Name: CMAKE_INSTALL_PREFIX<br />
Description: Install path prefix, prepended onto install directories.<br />
Current Value: C:/Program Files/CMake<br />
New Value (Enter to keep current value):<br />
<br />
Please wait while cmake processes CMakeLists.txt files....<br />
<br />
CMake complete, run make to build project.<br />
</syntaxhighlight><br />
<br />
Using CMake to build a project in non-interactive mode is a simple process if the project has few or no options. For larger projects like VTK, using ccmake, cmake -i, or cmake-gui is recommended. To build a project with a non-interactive CMake, first change directory to where you want the binaries to be placed. For an in-source build, run cmake . and pass in any options using the -D flag. For out-of-source builds, the process is the same except you run cmake and also provide the path to the source code as its argument. Then type make and your project should compile. Some projects will have install targets as well and you can type make install to install them.<br />
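<br />
For example, a non-interactive out-of-source build of the earlier Hello project might look like the following; the build directory name and install prefix shown are only illustrative:<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir Hello-build<br />
cd Hello-build<br />
cmake -D CMAKE_INSTALL_PREFIX:PATH=/opt/hello ../Hello<br />
make<br />
make install<br />
</syntaxhighlight><br />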
<br />
====Specifying the Compiler to CMake====<br />
<br />
On some systems, you may have more than one compiler to choose from or your compiler may be in a non-standard place. In these cases, you will need to specify to CMake where your desired compiler is located. There are three ways to specify this: the generator can specify the compiler; an environment variable can be set; or a cache entry can be set. Some generators are tied to a specific compiler; for example, the Visual Studio 8 generator always uses the Microsoft Visual Studio 8 compiler. For Makefile-based generators, CMake will try a list of usual compilers until it finds a working one. The list can be found in the files:<br />
<br />
<syntaxhighlight lang="text"><br />
Modules/CMakeDetermineCCompiler.cmake and<br />
Modules/CMakeDetermineCXXCompiler.cmake<br />
</syntaxhighlight><br />
<br />
The lists can be preempted with environment variables that can be set before CMake is run. The CC environment variable specifies the C compiler, while CXX specifies the C++ compiler. You can specify the compilers directly on the command line by using -D CMAKE_CXX_COMPILER=cl for example.<br />
<br />
Once CMake has been run and picked a compiler, you can change the selection by changing the cache entries CMAKE_CXX_COMPILER and CMAKE_C_COMPILER, although this is not recommended. The problem with doing this is that the project you are configuring may have already run some tests on the compiler to determine what it supports. Changing the compiler does not normally cause these tests to be rerun, which can lead to incorrect results. If you must change the compiler, start over with an empty binary directory. The flags for the compiler and the linker can also be changed by setting environment variables. Setting LDFLAGS will initialize the cache values for link flags, while CXXFLAGS and CFLAGS will initialize CMAKE_CXX_FLAGS and CMAKE_C_FLAGS respectively.<br />
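<br />
As a sketch, either approach can be used when configuring a fresh build tree; clang is just an example compiler here and need not be the one on your system:<br />
<br />
<syntaxhighlight lang="text"><br />
# environment variables, set before the first CMake run<br />
CC=clang CXX=clang++ CXXFLAGS="-O2" cmake ../Hello<br />
<br />
# or pass the cache entries on the command line instead<br />
cmake -D CMAKE_C_COMPILER=clang -D CMAKE_CXX_COMPILER=clang++ ../Hello<br />
</syntaxhighlight><br />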
<br />
<br />
====Dependency Analysis====<br />
<br />
CMake has powerful, built-in implicit dependency (#include) analysis capabilities for C, C++, and Fortran source code files. CMake also has limited support for Java dependencies. Since Integrated Development Environments (IDEs) support and maintain their own dependency information, CMake skips this step for those build systems. However, Makefiles with a make program do not know how to automatically compute and keep dependency information up-to-date. For these builds, CMake automatically computes dependency information for C, C++, and Fortran files. Both the generation and maintenance of these dependencies are automatically done by CMake. Once a project is initially configured by CMake, users only need to run make, and CMake does the rest of the work. CMake's dependencies fully support parallel builds for multiprocessor systems.<br />
<br />
Although users do not need to know how CMake does this work, it may be useful to look at the dependency information files for a project. The information for each target is stored in four files called depend.make, flags.make, build.make, and DependInfo.cmake. depend.make stores the depend information for all the object files in the directory. flags.make contains the compile flags used for the source files of this target. If they change, then the files will be recompiled. DependInfo.cmake is used to keep the dependency information up-to-date and contains information about which files are part of the project and the languages they are in. Finally, the rules for building the dependencies are stored in build.make. If a dependency is out-of-date then all of the dependencies for that target will be recomputed, keeping the dependency information current.<br />
<br />
===Editing CMakeLists Files===<br />
<br />
CMakeLists files can be edited in almost any text editor. Some editors, such as Notepad++, come with CMake syntax highlighting and indentation support built-in. For editors such as Emacs or Vim, CMake includes indentation and syntax highlighting modes. These can be found in the Auxiliary directory of the source distribution, or downloaded from the CMake web site. The file cmake-mode.el is the Emacs mode, and cmake-indent.vim and cmake-syntax.vim are used by Vim. Within Visual Studio, CMakeLists files are listed as part of the project and you can edit them simply by double-clicking on them. Within any of the supported generators (Makefiles, Visual Studio, etc.), if you edit a CMakeLists file and rebuild, there are rules that will automatically invoke CMake to update the generated files (e.g. Makefiles or project files) as required. This helps ensure that your generated files are always in sync with your CMakeLists files.<br />
<br />
Since CMake computes and maintains dependency information, CMake executables must always be available (though they don't have to be in your PATH) when make or an IDE is being run on CMake-generated files. This means that if a CMake input file changes on disk, your build system will automatically re-run CMake and produce up-to-date build files. For this reason, you generally should not generate Makefiles or projects with CMake and move them to another machine that does not have CMake installed.<br />
<br />
===Setting Initial Values for CMake===<br />
<br />
While CMake works well in an interactive mode, sometimes you will need to set up cache entries without running a GUI. This is common when setting up nightly dashboards, or if you will be creating many build trees with the same cache values. In these cases, the CMake cache can be initialized in two different ways. The first way is to pass the cache values on the CMake command line using -D CACHE_VAR:TYPE=VALUE arguments. For example, consider the following nightly dashboard script for a UNIX machine:<br />
<br />
<syntaxhighlight lang="text"><br />
#!/bin/tcsh<br />
<br />
cd ${HOME}<br />
<br />
# wipe out the old binary tree and then create it again<br />
rm -rf Foo-Linux<br />
mkdir Foo-Linux<br />
cd Foo-Linux<br />
<br />
# run cmake to setup the cache<br />
cmake -DBUILD_TESTING:BOOL=ON <etc...> ../Foo<br />
<br />
# generate the dashboard<br />
ctest -D Nightly<br />
</syntaxhighlight><br />
<br />
The same idea can be used with a batch file on Windows. The second way is to create a file to be loaded using CMake's -C option. In this case, instead of setting up the cache with -D options, it is done through a file that is parsed by CMake. The syntax for this file is the standard CMakeLists syntax, which is typically a series of set() (page 330) commands such as:<br />
<br />
<syntaxhighlight lang="text"><br />
#Build the vtkHybrid kit.<br />
set (VTK_USE_HYBRID ON CACHE BOOL "doc string")<br />
</syntaxhighlight><br />
<br />
In some cases there might be an existing cache, and you want to force the cache values to be set a certain way. For example, say you want to turn Hybrid on even if the user has previously run CMake and turned it off. Then you can do<br />
<br />
<syntaxhighlight lang="text"><br />
#Build the vtkHybrid kit always,<br />
set (VTK_USE_HYBRID ON CACHE BOOL "doc" FORCE)<br />
</syntaxhighlight><br />
<br />
Another option is to set and then hide options so the user will not be tempted to adjust them later on. This can be done using the following commands:<br />
<br />
<syntaxhighlight lang="text"><br />
#Build the vtkHybrid kit always and don't distract<br />
#the user by showing the option.<br />
set (VTK_USE_HYBRID ON CACHE INTERNAL "doc" FORCE)<br />
mark_as_advanced (VTK_USE_HYBRID)<br />
</syntaxhighlight><br />
<br />
You might be tempted to edit the cache file directly, or to "initialize" a project by giving it an initial cache file. This may not work and could cause additional problems in the future. First, the syntax of the CMake cache is subject to change. Second, cache files contain full paths which make them unsuitable for moving between binary trees. If you want to initialize a cache file, use one of the two standard methods described above.<br />
<br />
<br />
===Building Your Project===<br />
<br />
After you have run CMake, your project will be ready to be built. If your target generator is based on Makefiles then you can build your project by changing the directory to your binary tree and typing make (or gmake or nmake as appropriate). If you generated files for an IDE such as Visual Studio, you can start your IDE, load the project files into it, and build as you normally would.<br />
<br />
Another option is to use CMake's --build option from the command line. This option is simply a convenience that allows you to build your project from the command line, even if that requires launching an IDE. The command line options for --build include:<br />
<br />
<syntaxhighlight lang="text"><br />
Usage: cmake --build <dir> [options] [-- [native-options]]<br />
<br />
Options:<br />
<dir> = Project binary directory to be built.<br />
--target <tgt> = Build <tgt> instead of default targets.<br />
--config <cfg> = For multi-configuration tools, choose <cfg>.<br />
--clean-first = Build target 'clean' first, then build.<br />
                  = (To clean only, use --target 'clean'.)<br />
<br />
-- = Pass remaining options to the native tool.<br />
</syntaxhighlight><br />
<br />
Even if you are using Visual Studio as your generator, you can still build your project from the command line by typing:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake --build <your binary dir><br />
</syntaxhighlight><br />
<br />
That is all there is to installing and running CMake for simple projects. In the following chapters, we will consider CMake in more detail and explain how to use it on more complex software projects.<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=MastringCmakeVersion31:Chapter_12&diff=5613MastringCmakeVersion31:Chapter 122020-09-21T12:09:23Z<p>Onionmixer: CMAKE Chapter 12</p>
<hr />
<div>==CHAPTER TWELVE::TUTORIAL==<br />
<br />
This chapter provides a step-by-step tutorial that covers common build system issues that CMake helps address. Many of these topics have been introduced in prior chapters as separate issues, but seeing how they all work together in an example project can be very helpful. This tutorial can be found in the Tests/Tutorial directory of the CMake source code tree. Each step has its own subdirectory containing a complete copy of the tutorial for that step.<br />
<br />
<br />
===A Basic Starting Point (Step 1)===<br />
<br />
The most basic project is an executable built from source code files. For simple projects, a two line CMake Lists file is all that is required. This will be the starting point for our tutorial. The CMakeLists file looks like<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6)<br />
project (Tutorial)<br />
<br />
add_executable (Tutorial tutorial.cxx)<br />
</syntaxhighlight><br />
<br />
Note that this example uses lower case commands in the CMakeLists file. Upper, lower, and mixed case commands are supported by CMake. The source code for tutorial.cxx will compute the square root of a number and the first version of it is very simple, as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
// A simple program that computes the square root of a number<br />
#include <stdio.h><br />
#include <math.h><br />
int main (int argc, char *argv[])<br />
{<br />
  if (argc < 2)<br />
    {<br />
    fprintf(stdout, "Usage: %s number\n",argv[0]);<br />
    return 1;<br />
    }<br />
  double inputValue = atof(argv[1]);<br />
  double outputValue = sqrt(inputValue);<br />
  fprintf(stdout, "The square root of %g is %g\n",<br />
          inputValue, outputValue);<br />
<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
<br />
====Adding a Version Number and Configured Header File====<br />
<br />
The first feature we will add is to provide our executable and project with a version number. While you can do this exclusively in the source code, doing it in the CMakeLists file provides more flexibility. To add a version number we modify the CMakeLists file as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6)<br />
project (Tutorial)<br />
<br />
# The version number.<br />
set (Tutorial_VERSION_MAJOR 1)<br />
set (Tutorial_VERSION_MINOR 0)<br />
<br />
# configure a header file to pass some of the CMake settings<br />
# to the source code<br />
configure_file (<br />
"${PROJECT_SOURCE_DIR}/TutorialConfig.h.in"<br />
"${PROJECT_BINARY_DIR}/TutorialConfig.h"<br />
)<br />
<br />
# add the binary tree to the search path for include files<br />
# so that we will find TutorialConfig.h<br />
include_directories ("${PROJECT_BINARY_DIR}")<br />
<br />
# add the executable<br />
add_executable(Tutorial tutorial.cxx)<br />
</syntaxhighlight><br />
<br />
Since the configured file will be written into the binary tree, we must add that directory to the list of paths to search for include files. We then create a TutorialConfig.h.in file in the source tree with the following contents:<br />
<br />
<syntaxhighlight lang="text"><br />
// the configured options and settings for Tutorial<br />
#define Tutorial_VERSION_MAJOR @Tutorial_VERSION_MAJOR@<br />
#define Tutorial_VERSION_MINOR @Tutorial_VERSION_MINOR@<br />
</syntaxhighlight><br />
<br />
When CMake configures this header file, the values for @Tutorial_VERSION_MAJOR@ and @Tutorial_VERSION_MINOR@ will be replaced by the values from the CMakeLists file. Next, we modify tutorial.cxx to include the configured header file and to make use of the version numbers. The resulting source code is listed below.<br />
<br />
<syntaxhighlight lang="text"><br />
// A simple program that computes the square root of a number<br />
#include <stdio.h><br />
#include <math.h><br />
#include "TutorialConfig.h"<br />
<br />
int main (int argc, char *argv[])<br />
{<br />
if (argc < 2)<br />
{<br />
fprintf(stdout, "%s Version %d.%d\n",<br />
argv[0],<br />
Tutorial_VERSION_MAJOR,<br />
Tutorial_VERSION_MINOR);<br />
fprintf(stdout, "Usage: %s number\n",argv[0]);<br />
<br />
return 1;<br />
}<br />
<br />
double inputValue = atof(argv[1]);<br />
double outputValue = sqrt(inputValue);<br />
fprintf(stdout,"The square root of %g is %g\n",<br />
inputValue, outputValue);<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
The main changes are the inclusion of the TutorialConfig.h header file and printing out a version number as part of the usage message.<br />
<br />
<br />
===Adding a Library (Step 2)===<br />
<br />
Now we will add a library to our project. This library will contain our own implementation for computing the square root of a number. The executable can then use this library instead of the standard square root function provided by the compiler. For this tutorial we will put the library into a subdirectory called MathFunctions. It will have the following one line CMakeLists file<br />
<br />
<syntaxhighlight lang="text"><br />
add_library(MathFunctions mysqrt.cxx)<br />
</syntaxhighlight><br />
<br />
The source file mysqrt.cxx has one function called mysqrt that provides similar functionality to the compiler's sqrt function. To make use of the new library we add an add_subdirectory() (page 277) call in the top level CMakeLists file so that the library will get built. We also add another include directory so that the MathFunctions/MathFunctions.h header file can be found for the function prototype. The last change is to add the new library to the executable. The last few lines of the top level CMakeLists file now look like<br />
<br />
<syntaxhighlight lang="text"><br />
include_directories ("${PROJECT_SOURCE_DIR}/MathFunctions")<br />
add_subdirectory (MathFunctions)<br />
<br />
# add the executable<br />
<br />
add_executable (Tutorial tutorial.cxx)<br />
target_link_libraries (Tutorial MathFunctions)<br />
</syntaxhighlight><br />
<br />
Now, let us consider making the MathFunctions library optional. In this tutorial there really isn't any reason to do so, but with larger libraries or libraries that rely on third-party code you might want to. The first step is to add an option() (page 327) to the top level CMakeLists file.<br />
<br />
<syntaxhighlight lang="text"><br />
# should we use our own math functions?<br />
option (USE_MYMATH<br />
"Use tutorial provided math implementation" ON)<br />
</syntaxhighlight><br />
<br />
This will show up in the CMake GUI with a default value of ON that the user can change as desired. This setting will be stored in the cache so that the user does not need to keep setting it each time they run CMake on this project. The next change is to make the build and linking of the MathFunctions library conditional. To do this we change the end of the top level CMakeLists file to look like the following<br />
<br />
<syntaxhighlight lang="text"><br />
# add the MathFunctions library?<br />
#<br />
if (USE_MYMATH)<br />
include_directories ("${PROJECT_SOURCE_DIR}/MathFunctions")<br />
add_subdirectory (MathFunctions)<br />
set (EXTRA_LIBS ${EXTRA_LIBS} MathFunctions)<br />
endif (USE_MYMATH)<br />
<br />
# add the executable<br />
add_executable (Tutorial tutorial.cxx)<br />
target_link_libraries (Tutorial ${EXTRA_LIBS})<br />
</syntaxhighlight><br />
<br />
<br />
This uses the setting of USE_MYMATH to determine whether MathFunctions should be compiled and used. Note the use of a variable (EXTRA_LIBS in this case) to collect up any optional libraries for later linking into the executable. This is a common approach used to keep larger projects with many optional components clean. The corresponding changes to the source code are fairly straightforward and leave us with:<br />
<br />
<syntaxhighlight lang="text"><br />
// A simple program that computes the square root of a number<br />
#include <stdio.h><br />
#include <math.h><br />
#include "TutorialConfig.h"<br />
<br />
#ifdef USE_MYMATH<br />
#include "MathFunctions.h"<br />
#endif<br />
<br />
int main (int argc, char *argv[])<br />
{<br />
  if (argc < 2)<br />
    {<br />
    fprintf(stdout, "%s Version %d.%d\n", argv[0],<br />
            Tutorial_VERSION_MAJOR,<br />
            Tutorial_VERSION_MINOR);<br />
    fprintf(stdout, "Usage: %s number\n",argv[0]);<br />
<br />
    return 1;<br />
    }<br />
<br />
  double inputValue = atof(argv[1]);<br />
<br />
#ifdef USE_MYMATH<br />
  double outputValue = mysqrt(inputValue);<br />
#else<br />
  double outputValue = sqrt(inputValue);<br />
#endif<br />
<br />
  fprintf(stdout,"The square root of %g is %g\n",<br />
          inputValue, outputValue);<br />
  return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
In the source code we make use of USE_MYMATH as well. This is provided from CMake to the source code through the TutorialConfig.h.in configured file by adding the following line to it:<br />
<br />
<syntaxhighlight lang="text"><br />
#cmakedefine USE_MYMATH<br />
</syntaxhighlight><br />
<br />
<br />
===Installing and Testing (Step 3)===<br />
<br />
For the next step we will add install rules and testing support to our project. The install rules are fairly straightforward. For the MathFunctions library, we set up the library and the header file to be installed by adding the following two lines to MathFunctions' CMakeLists file<br />
<br />
<syntaxhighlight lang="text"><br />
install (TARGETS MathFunctions DESTINATION bin)<br />
install (FILES MathFunctions.h DESTINATION include)<br />
</syntaxhighlight><br />
<br />
For the application, the following lines are added to the top level CMakeLists file to install the executable and the configured header file:<br />
<br />
<syntaxhighlight lang="text"><br />
# add the install targets<br />
install (TARGETS Tutorial DESTINATION bin)<br />
install (FILES "${PROJECT_BINARY_DIR}/TutorialConfig.h"<br />
DESTINATION include)<br />
</syntaxhighlight><br />
<br />
That is all there is to it. At this point you should be able to build the tutorial, then type make install (or build the INSTALL target from an IDE), and it will install the appropriate header files, libraries, and executables. The CMake variable CMAKE_INSTALL_PREFIX is used to determine the root of where the files will be installed. Adding testing is also a fairly straightforward process. At the end of the top level CMakeLists file we can add a number of basic tests to verify that the application is working correctly.<br />
<br />
<syntaxhighlight lang="text"><br />
# does the application run<br />
add_test (TutorialRuns Tutorial 25)<br />
<br />
# does it compute the sqrt of 25<br />
add_test (TutorialComp25 Tutorial 25)<br />
<br />
set_tests_properties (TutorialComp25<br />
PROPERTIES PASS_REGULAR_EXPRESSION "25 is 5")<br />
<br />
# does it handle negative numbers<br />
add_test (TutorialNegative Tutorial -25)<br />
set_tests_properties (TutorialNegative<br />
PROPERTIES PASS_REGULAR_EXPRESSION "-25 is 0")<br />
<br />
# does it handle small numbers<br />
add_test (TutorialSmall Tutorial 0.0001)<br />
set_tests_properties (TutorialSmall<br />
PROPERTIES PASS_REGULAR_EXPRESSION "0.0001 is 0.01")<br />
<br />
# does the usage message work?<br />
add_test (TutorialUsage Tutorial)<br />
set_tests_properties (TutorialUsage<br />
PROPERTIES<br />
PASS_REGULAR_EXPRESSION "Usage: .*number")<br />
</syntaxhighlight><br />
<br />
The first test simply verifies that the application runs, does not segfault or otherwise crash, and has a zero return value. This is the basic form of a CTest test. The next few tests all make use of the PASS_REGULAR_EXPRESSION test property to verify that the output of the test contains certain strings, in this case: verifying that the computed square root is what it should be and that the usage message is printed when an incorrect number of arguments are provided. If you wanted to add a lot of tests to test different input values you might consider creating a macro() (page 324) like the following:<br />
<br />
<syntaxhighlight lang="text"><br />
#define a macro to simplify adding tests, then use it<br />
macro (do_test arg result)<br />
add_test (TutorialComp${arg} Tutorial ${arg})<br />
set_tests_properties (TutorialComp${arg}<br />
PROPERTIES PASS_REGULAR_EXPRESSION ${result})<br />
endmacro (do_test)<br />
<br />
# do a bunch of result based tests<br />
do_test (25 "25 is 5")<br />
do_test (-25 "-25 is 0")<br />
</syntaxhighlight><br />
<br />
For each invocation of do_test, another test is added to the project with a name, input, and results based on the passed arguments.<br />
<br />
<br />
===Adding System Introspection (Step 4)===<br />
<br />
Next let us consider adding some code to our project that depends on features the target platform may not have. For this example we will add some code that depends on whether or not the target platform has the log and exp functions. Of course, almost every platform has these functions, but for this tutorial assume that they are less common. If the platform has log then we will use that to compute the square root in the mysqrt function. We first test for the availability of these functions using the CheckFunctionExists.cmake macro in the top level CMakeLists file as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
# does this system provide the log and exp functions?<br />
include (${CMAKE_ROOT}/Modules/CheckFunctionExists.cmake)<br />
check_function_exists (log HAVE_LOG)<br />
check_function_exists (exp HAVE_EXP)<br />
</syntaxhighlight><br />
<br />
Next we modify the TutorialConfig.h.in to define those values if CMake found them on the platform as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
// does the platform provide exp and log functions?<br />
#cmakedefine HAVE_LOG<br />
#cmakedefine HAVE_EXP<br />
</syntaxhighlight><br />
<br />
It is important that the tests for log and exp are done before the configure_file() (page 282) command for TutorialConfig.h. The configure_file command immediately configures the file using the current settings in CMake. Finally, in the mysqrt function we can provide an alternate implementation based on log and exp if they are available on the system using the following code:<br />
<br />
<syntaxhighlight lang="text"><br />
// if we have both log and exp then use them<br />
#if defined (HAVE_LOG) && defined (HAVE_EXP)<br />
result = exp(log(x) * 0.5);<br />
#else // otherwise use an iterative approach<br />
...<br />
</syntaxhighlight><br />
<br />
<br />
===Adding a Generated File and Generator (Step 5)===<br />
<br />
In this section we will show how you can add a generated source file into the build process of an application. For this example, we will create a table of precomputed square roots as part of the build process, and then compile that table into our application. To accomplish this we first need a program that will generate the table. In the MathFunctions subdirectory a new source file named MakeTable.cxx will do just that.<br />
<br />
<syntaxhighlight lang="text"><br />
// A simple program that builds a sqrt table<br />
#include <stdio.h><br />
#include <math.h><br />
<br />
int main (int argc, char *argv[])<br />
{<br />
int i;<br />
double result;<br />
<br />
// make sure we have enough arguments<br />
if (argc < 2)<br />
{<br />
return 1;<br />
}<br />
<br />
// open the output file<br />
FILE *fout = fopen(argv[1],"w");<br />
if (!fout)<br />
{<br />
return 1;<br />
}<br />
<br />
// create a source file with a table of square roots<br />
fprintf (fout,"double sqrtTable[] = {\n");<br />
for (i = 0; i < 10; ++i)<br />
{<br />
result = sqrt (static_cast<double>(i));<br />
fprintf (fout, "%g, \n", result);<br />
}<br />
<br />
// close the table with a zero<br />
fprintf (fout, "0};\n");<br />
fclose (fout);<br />
return 0;<br />
}<br />
</syntaxhighlight><br />
<br />
Note that the table is produced as valid C++ code and that the name of the file to write the output to is passed in as an argument. The next step is to add the appropriate commands to MathFunctions' CMakeLists file to build the MakeTable executable, and then run it as part of the build process. A few commands are needed to accomplish this, as shown below.<br />
<br />
<syntaxhighlight lang="text"><br />
# first we add the executable that generates the table<br />
add_executable (MakeTable MakeTable.cxx)<br />
<br />
# add the command to generate the source code<br />
add_custom_command (<br />
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/Table.h<br />
COMMAND MakeTable ${CMAKE_CURRENT_BINARY_DIR}/Table.h<br />
DEPENDS MakeTable<br />
)<br />
<br />
# add the binary tree directory to the search path for<br />
# include files<br />
include_directories( ${CMAKE_CURRENT_BINARY_DIR} )<br />
<br />
# add the main library<br />
add_library (MathFunctions mysqrt.cxx<br />
${CMAKE_CURRENT_BINARY_DIR}/Table.h )<br />
</syntaxhighlight><br />
<br />
First, the executable for MakeTable is added as any other executable would be added. Then we add a custom command that specifies how to produce Table.h by running MakeTable. Next, we have to let CMake know that mysqrt.cxx depends on the generated file Table.h. This is done by adding the generated Table.h to the list of sources for the library MathFunctions. We also have to add the current binary directory to the list of include directories so that Table.h can be found and included by mysqrt.cxx.<br />
<br />
When this project is built, it will first build the MakeTable executable. It will then run MakeTable to produce Table.h. Finally, it will compile mysqrt.cxx which includes Table.h to produce the MathFunctions library.<br />
<br />
At this point the top level CMakeLists file with all the features we have added looks like the following<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6)<br />
project (Tutorial)<br />
<br />
# The version number.<br />
set (Tutorial_VERSION_MAJOR 1)<br />
set (Tutorial_VERSION_MINOR 0)<br />
<br />
# does this system provide the log and exp functions?<br />
include (${CMAKE_ROOT}/Modules/CheckFunctionExists.cmake)<br />
<br />
check_function_exists (log HAVE_LOG)<br />
check_function_exists (exp HAVE_EXP)<br />
<br />
# should we use our own math functions<br />
option (USE_MYMATH<br />
"Use tutorial provided math implementation" ON)<br />
<br />
# configure a header file to pass some of the CMake settings<br />
# to the source code<br />
configure_file (<br />
"${PROJECT_SOURCE_DIR}/TutorialConfig.h.in"<br />
"${PROJECT_BINARY_DIR}/TutorialConfig.h"<br />
)<br />
<br />
# add the binary tree to the search path for include files<br />
# so that we will find TutorialConfig.h<br />
include_directories ("${PROJECT_BINARY_DIR}")<br />
<br />
# add the MathFunctions library?<br />
if (USE_MYMATH)<br />
include_directories ("${PROJECT_SOURCE_DIR}/MathFunctions")<br />
add_subdirectory (MathFunctions)<br />
set (EXTRA_LIBS ${EXTRA_LIBS} MathFunctions)<br />
endif (USE_MYMATH)<br />
<br />
# add the executable<br />
add_executable (Tutorial tutorial.cxx)<br />
target_link_libraries (Tutorial ${EXTRA_LIBS})<br />
<br />
# add the install targets<br />
install (TARGETS Tutorial DESTINATION bin)<br />
install (FILES "${PROJECT_BINARY_DIR}/TutorialConfig.h"<br />
DESTINATION include)<br />
<br />
# does the application run<br />
add_test (TutorialRuns Tutorial 25)<br />
<br />
# does the usage message work?<br />
add_test (TutorialUsage Tutorial)<br />
set_tests_properties (TutorialUsage<br />
PROPERTIES<br />
PASS_REGULAR_EXPRESSION "Usage: .*number"<br />
)<br />
<br />
#define a macro to simplify adding tests<br />
macro (do_test arg result)<br />
add_test (TutorialComp${arg} Tutorial ${arg})<br />
set_tests_properties (TutorialComp${arg}<br />
PROPERTIES PASS_REGULAR_EXPRESSION ${result}<br />
)<br />
endmacro (do_test)<br />
<br />
# do a bunch of result based tests<br />
do_test (4 "4 is 2")<br />
do_test (9 "9 is 3")<br />
do_test (5 "5 is 2.236")<br />
do_test (7 "7 is 2.645")<br />
do_test (25 "25 is 5")<br />
do_test (-25 "-25 is 0")<br />
do_test (0.0001 "0.0001 is 0.01")<br />
</syntaxhighlight><br />
<br />
TutorialConfig.h.in looks like:<br />
<br />
<syntaxhighlight lang="text"><br />
// the configured options and settings for Tutorial<br />
#define Tutorial_VERSION_MAJOR @Tutorial_VERSION_MAJOR@<br />
#define Tutorial_VERSION_MINOR @Tutorial_VERSION_MINOR@<br />
#cmakedefine USE_MYMATH<br />
<br />
// does the platform provide exp and log functions?<br />
#cmakedefine HAVE_LOG<br />
#cmakedefine HAVE_EXP<br />
</syntaxhighlight><br />
<br />
and the CMakeLists file for MathFunctions looks like<br />
<br />
<syntaxhighlight lang="text"><br />
# first we add the executable that generates the table<br />
add_executable (MakeTable MakeTable.cxx)<br />
# add the command to generate the source code<br />
add_custom_command (<br />
OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/Table.h<br />
DEPENDS MakeTable<br />
COMMAND MakeTable ${CMAKE_CURRENT_BINARY_DIR}/Table.h<br />
)<br />
# add the binary tree directory to the search path<br />
# for include files<br />
include_directories ( ${CMAKE_CURRENT_BINARY_DIR} )<br />
<br />
# add the main library<br />
add_library (MathFunctions mysqrt.cxx<br />
${CMAKE_CURRENT_BINARY_DIR}/Table.h)<br />
<br />
install (TARGETS MathFunctions DESTINATION bin)<br />
install (FILES MathFunctions.h DESTINATION include)<br />
</syntaxhighlight><br />
<br />
<br />
===Building an Installer (Step 6)===<br />
<br />
Next, suppose that we want to distribute our project to other people so that they can use it. We want to provide both binary and source distributions on a variety of platforms. This is a little different from the install we did previously in Installing and Testing (Step 3), where we were installing the binaries that we had built from the source code. In this example, we will be building installation packages that support binary installations and package management features as found in Cygwin, Debian, RPM, etc. To accomplish this we will use CPack to create platform-specific installers as described in Chapter 9. Specifically, we need to add a few lines to the bottom of our top-level CMakeLists.txt file.<br />
<br />
<syntaxhighlight lang="text"><br />
# build a CPack driven installer package<br />
include (InstallRequiredSystemLibraries)<br />
set (CPACK_RESOURCE_FILE_LICENSE<br />
"${CMAKE_CURRENT_SOURCE_DIR}/License.txt")<br />
set (CPACK_PACKAGE_VERSION_MAJOR "${Tutorial_VERSION_MAJOR}")<br />
set (CPACK_PACKAGE_VERSION_MINOR "${Tutorial_VERSION_MINOR}")<br />
include (CPack)<br />
</syntaxhighlight><br />
<br />
That is all there is to it. We start by including InstallRequiredSystemLibraries. This module will include any runtime libraries that are needed by the project for the current platform. Next, we set some CPack variables to where we have stored the license and version information for this project. The version information makes use of the variables we set earlier in this tutorial. Finally, we include the CPack module, which will use these variables and some other properties of the system to set up an installer.<br />
<br />
The next step is to build the project in the usual manner and then run CPack on it. To build a binary distribution, you would run:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack --config CPackConfig.cmake<br />
</syntaxhighlight><br />
<br />
To create a source distribution, you would type<br />
<br />
<syntaxhighlight lang="text"><br />
cpack --config CPackSourceConfig.cmake<br />
</syntaxhighlight><br />
<br />
<br />
===Adding Support for a Dashboard (Step 7)===<br />
<br />
Adding support for submitting our test results to a dashboard is very easy. We already defined a number of tests for our project in the earlier steps of this tutorial. We just have to run those tests and submit them to a dashboard. To include support for dashboards, we include the CTest module in our toplevel CMakeLists file.<br />
<br />
<syntaxhighlight lang="text"><br />
# enable dashboard scripting<br />
include (CTest)<br />
</syntaxhighlight><br />
<br />
We also create a CTestConfig.cmake file where we can specify the name of this project for the dashboard.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_PROJECT_NAME "Tutorial")<br />
</syntaxhighlight><br />
<br />
CTest will read in this file when it runs. To create a simple dashboard you can run CMake on your project, change directory to the binary tree, and then run ctest -D Experimental. The results of your dashboard will be uploaded to Kitware's public dashboard at:<br />
<br />
<syntaxhighlight lang="text"><br />
http://www.cdash.org/CDash/index.php?project=PublicDashboard<br />
</syntaxhighlight><br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=MastringCmakeVersion31:Chapter_10&diff=5612MastringCmakeVersion31:Chapter 102020-09-21T12:08:41Z<p>Onionmixer: note 추가</p>
<hr />
<div>==CHAPTER TEN::AUTOMATION & TESTING WITH CMAKE==<br />
<br />
===Testing with CMake, CTest, and CDash===<br />
<br />
Testing is a key tool for producing and maintaining robust, valid software. This chapter will examine the tools that are part of CMake to support software testing. We will begin with a brief discussion of testing approaches, and then discuss how to add tests to your software project using CMake. Finally we will look at additional tools that support creating centralized software status dashboards.<br />
<br />
The tests for a software package may take a number of forms. At the most basic level there are smoke tests, such as one that simply verifies that the software compiles. While this may seem like a simple test, with the wide variety of platforms and configurations available, smoke tests catch more problems than any other type of test. Another form of smoke test is to verify that a test runs without crashing. This can be handy for situations where the developer does not want to spend the time creating more complex tests, but is willing to run some simple tests. Most of the time these simple tests can be small example programs. Running them verifies not only that the build was successful, but that any required shared libraries can be loaded (for projects that use them), and that at least some of the code can be executed without crashing.<br />
<br />
Moving beyond basic smoke tests leads to more specific tests such as regression, black-, and white-box testing. Each of these has its strengths. Regression testing verifies that the results of a test do not change over time or platform. This is very useful when performed frequently, as it provides a quick check that the behavior and results of the software have not changed. When a regression test fails, a quick look at recent code changes can usually identify the culprit. Unfortunately, regression tests typically require more effort to create than other tests.<br />
<br />
White- and black-box testing refer to tests written to exercise units of code (at various levels of integration), with and without knowledge of how those units are implemented, respectively. White-box testing is designed to stress potential failure points in the code knowing how that code was written, and hence its weaknesses. As with regression testing, it can take a substantial amount of effort to create good white-box tests. Black-box testing typically knows little or nothing about the implementation of the software other than its public API. Black-box testing can provide a lot of code coverage without too much effort in developing the tests. This is especially true for libraries of object-oriented software where the APIs are well defined. A black-box test can be written to go through and invoke a number of typical methods on all the classes in the software.<br />
<br />
The final type of testing we will discuss is software standard compliance testing. While the other test types we have discussed are focused on determining if the code works properly, compliance testing tries to determine if the code adheres to the coding standards of the software project. This could be a check to verify that all classes have implemented some key method, or that all functions have a common prefix. The options for this type of test are limitless and there are a number of ways to perform such testing. There are software analysis tools that can be used, or specialized test programs (perhaps python scripts, etc.) could be written. The key point to realize is that the tests do not necessarily have to involve running some part of the software. The tests might run some other tool on the source code itself.<br />
<br />
There are a number of reasons why it helps to have testing support integrated into the build process. First, complex software projects may have a number of configuration or platform-dependent options. The build system knows what options can be enabled and can then enable the appropriate tests for those options. For example, the Visualization Toolkit (VTK) includes support for a parallel processing library called MPI. If VTK is built with MPI support then additional tests are enabled that make use of MPI and verify that the MPI-specific code in VTK works as expected. Secondly, the build system knows where the executables will be placed, and it has tools for finding other required executables (such as perl, python, etc.). The third reason is that with UNIX Makefiles it is common to have a test target in the Makefile so that developers can type make test and have the test(s) run. In order for this to work, the build system must have some knowledge of the testing process.<br />
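<br />
As a sketch of the first point, a project might guard MPI-specific tests behind the option that enables MPI support. The option and test names here are hypothetical; ${MPIEXEC} comes from CMake's FindMPI module:<br />
<br />
<syntaxhighlight lang="text"><br />
if (MYPROJECT_USE_MPI)<br />
  add_test (NAME ParallelSmoke<br />
            COMMAND ${MPIEXEC} -n 2 $<TARGET_FILE:ParallelSmoke>)<br />
endif ()<br />
</syntaxhighlight><br />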
<br />
<br />
===How Does CMake Facilitate Testing?===<br />
<br />
CMake facilitates testing your software through special testing commands and the CTest executable. First, we will discuss the key testing commands in CMake. To add testing to a CMake-based project, simply include(CTest) (page 317) and use the add_test() (page 277) command. The add_test command has a simple syntax as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (NAME TestName COMMAND ExecutableToRun arg1 arg2 ...)<br />
</syntaxhighlight><br />
<br />
The first argument is simply a string name for the test. This is the name that will be displayed by testing<br />
programs. The second argument is the executable to run. The executable can be built as part of the project<br />
or it can be a standalone executable such as python, perl, etc. The remaining arguments will be passed to the<br />
running executable. A typical example of testing using the add_test command would look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (TestInstantiator TestInstantiator.cxx)<br />
target_link_libraries (TestInstantiator vtkCommon)<br />
add_test (NAME TestInstantiator<br />
COMMAND TestInstantiator)<br />
</syntaxhighlight><br />
<br />
The add_test command is typically placed in the CMakeLists file for the directory that has the test in it. For large projects, there may be multiple CMakeLists files with add_test commands in them. Once the add_test commands are present in the project, the user can run the tests by invoking the "test" target of the Makefile, or the RUN_TESTS target of Visual Studio or Xcode. An example of running the CMake tests using the Makefile generator on Linux would be:<br />
<br />
<syntaxhighlight lang="text"><br />
$ make test<br />
Running tests...<br />
Test project<br />
Start 2: kwsys.testEncode<br />
1/20 Test #2: kwsys.testEncode .......... Passed 0.02 sec<br />
Start 3: kwsys.testTerminal<br />
2/20 Test #3: kwsys.testTerminal ........ Passed 0.02 sec<br />
Start 4: kwsys.testAutoPtr<br />
3/20 Test #4: kwsys.testAutoPtr ......... Passed 0.02 sec<br />
</syntaxhighlight><br />
<br />
<br />
===Additional Test Properties===<br />
<br />
By default a test passes if all of the following conditions are true:<br />
<br />
* The test executable was found<br />
* The test ran without exception<br />
* The test exited with return code 0<br />
<br />
That said, these behaviors can be modified using the set_property() (page 329) command:<br />
<br />
<syntaxhighlight lang="text"><br />
set_property (TEST test_name<br />
PROPERTY prop1 value1 value2 ...)<br />
</syntaxhighlight><br />
<br />
This command will set additional properties for the specified tests. Example properties are:<br />
<br />
'''ENVIRONMENT''' Specifies environment variables that should be defined for running a test. If set to a list of environment variables and values of the form MYVAR=value, those environment variables will be defined while the test is running. The environment is restored to its previous state after the test is done.<br />
<br />
'''LABELS''' Specifies a list of text labels associated with a test. These labels can be used to group tests together based on what they test. For example, you could add a label of MPI to all tests that exercise MPI code.<br />
<br />
'''WILL_FAIL''' If this option is set to true, then the test will pass if the return code is not 0, and fail if it is. This reverses the third condition of the pass requirements.<br />
<br />
'''PASS_REGULAR_EXPRESSION''' If this option is specified, then the output of the test is checked against the regular expression provided (a list of regular expressions may be passed in as well). If none of the regular expressions match, then the test will fail. If at least one of them matches, then the test will pass.<br />
<br />
'''FAIL_REGULAR_EXPRESSION''' If this option is specified, then the output of the test is checked against the regular expression provided (a list of regular expressions may be passed in as well). If none of the regular expressions match, then the test will pass. If at least one of them matches, then the test will fail.<br />
<br />
If both PASS_REGULAR_EXPRESSION (page 614) and FAIL_REGULAR_EXPRESSION (page 613) are specified, then the FAIL_REGULAR_EXPRESSION takes precedence. The following example illustrates using the PASS_REGULAR_EXPRESSION and FAIL_REGULAR_EXPRESSION:<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (NAME outputTest COMMAND outputTest)<br />
set (passRegex "^Test passed" "^All ok")<br />
set (failRegex "Error" "Fail")<br />
<br />
set_property (TEST outputTest<br />
PROPERTY PASS_REGULAR_EXPRESSION "${passRegex}")<br />
set_property (TEST outputTest<br />
PROPERTY FAIL_REGULAR_EXPRESSION "${failRegex}")<br />
</syntaxhighlight><br />
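<br />
The other properties described above can be set in the same way; a brief sketch, using hypothetical test names:<br />
<br />
<syntaxhighlight lang="text"><br />
set_property (TEST serverTest<br />
              PROPERTY ENVIRONMENT "MYVAR=value" "MYDEBUG=1")<br />
set_property (TEST parallelTest<br />
              PROPERTY LABELS MPI)<br />
set_property (TEST crashTest<br />
              PROPERTY WILL_FAIL TRUE)<br />
</syntaxhighlight><br />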
<br />
<br />
===Testing Using CTest===<br />
<br />
When you run the tests from your build environment, what really happens is that the build environment runs CTest. CTest is an executable that comes with CMake; it handles running the tests for the project. While CTest works well with CMake, you do not have to use CMake in order to use CTest. The main input file for CTest is called CTestTestfile.cmake. This file will be created in each directory that was processed by CMake (typically every directory with a CMakeLists file). The syntax of CTestTestfile.cmake is like the regular CMake syntax, with a subset of the commands available. If CMake is used to generate testing files, they will list any subdirectories that need to be processed as well as any add_test() (page 277) calls. The subdirectories are those that were added by subdirs() (page 350) or add_subdirectory() (page 277) commands. CTest can then parse these files to determine what tests to run. An example of such a file is shown below:<br />
<br />
<syntaxhighlight lang="text"><br />
# CMake generated Testfile for<br />
# Source directory: C:/CMake<br />
# Build directory: C:/CMakeBin<br />
#<br />
# This file includes the relevant testing commands required<br />
# for testing this directory and lists subdirectories to<br />
# be tested as well.<br />
<br />
ADD_TEST (SystemInformationNew ...)<br />
<br />
SUBDIRS (Source/kwsys)<br />
SUBDIRS (Utilities/cmzlib)<br />
...<br />
</syntaxhighlight><br />
<br />
When CTest parses the CTestTestfile.cmake files, it will extract the list of tests from them. These tests will be run, and for each test CTest will display the name of the test and its status. Consider the following sample output:<br />
<br />
<syntaxhighlight lang="text"><br />
$ ctest<br />
Test project C:/CMake-build26<br />
Start 1: SystemInformationNew<br />
1/21 Test #1: SystemInformationNew ...... Passed 5.78 sec<br />
Start 2: kwsys.testEncode<br />
2/21 Test #2: kwsys.testEncode .......... Passed 0.02 sec<br />
Start 3: kwsys.testTerminal<br />
3/21 Test #3: kwsys.testTerminal ........ Passed 0.00 sec<br />
Start 4: kwsys.testAutoPtr<br />
4/21 Test #4: kwsys.testAutoPtr ......... Passed 0.02 sec<br />
Start 5: kwsys.testHashSTL<br />
5/21 Test #5: kwsys.testHashSTL ......... Passed 0.02 sec<br />
...<br />
100% tests passed, 0 tests failed out of 21<br />
Total Test time (real) = 59.22 sec<br />
</syntaxhighlight><br />
<br />
CTest is run from within your build tree. It will run all the tests found in the current directory as well as any subdirectories listed in the CTestTestfile.cmake. For each test that is run CTest will report if the test passed and how long it took to run the test.<br />
<br />
The CTest executable includes some handy command line options to make testing a little easier. We will start by looking at the options you would typically use from the command line.<br />
<br />
<syntaxhighlight lang="text"><br />
-R <regex> Run tests matching regular expression<br />
-E <regex> Exclude tests matching regular expression<br />
-L <regex> Run tests with labels matching the regex<br />
-LE <regex> Run tests with labels not matching regexp<br />
-C <config> Choose the configuration to test<br />
-V, --verbose Enable verbose output from tests.<br />
-N, --show-only Disable actual execution of tests.<br />
-I [Start,End,Stride,test#,test#|Test file]<br />
Run specific tests by range and number.<br />
-H Display a help message<br />
</syntaxhighlight><br />
<br />
The -R option is probably the most commonly used. It allows you to specify a regular expression; only the tests with names matching the regular expression will be run. Using the -R option with the name (or part of the name) of a test is a quick way to run a single test. The -E option is similar except that it excludes all tests matching the regular expression. The -L and -LE options are similar to -R and -E, except that they apply to test labels that were set using the set_property() (page 329) command as described in section 0. The -C option is mainly for IDE builds where you might have multiple configurations, such as Release and Debug in the same tree. The argument following the -C determines which configuration will be tested. The -V argument is useful when you are trying to determine why a test is failing. With -V, CTest will print out the command line used to run the test, as well as any output from the test itself. The -V option can be used with any invocation of CTest to provide more verbose output. The -N option is useful if you want to see what tests CTest would run without actually running them.<br />
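<br />
For instance, the following invocations illustrate these options; the test name and label patterns are hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -R kwsys       # run only tests whose names match "kwsys"<br />
ctest -L MPI         # run only tests labeled MPI<br />
ctest -C Debug -V    # test the Debug configuration with verbose output<br />
ctest -N             # list the tests without running them<br />
</syntaxhighlight><br />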
<br />
Running the tests and making sure they all pass before committing any changes to the software is a sure-fire way to improve your software quality and development process. Unfortunately, for large projects the number of tests and the time required to run them may be prohibitive. In these situations the -I option of CTest can be used. The -I option allows you to flexibly specify a subset of the tests to run. For example, the following invocation of CTest will run every seventh test.<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -I ,,7<br />
</syntaxhighlight><br />
<br />
While this is not as good as running every test, it is better than not running any and it may be a more practical solution for many developers. Note that if the start and end arguments are not specified, as in this example, then they will default to the first and last tests. In another example, assume that you always want to run a few tests plus a subset of the others. In this case you can explicitly add those tests to the end of the arguments for -I. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -I ,,5,1,2,3,10<br />
</syntaxhighlight><br />
<br />
will run tests 1,2,3, and 10, plus every fifth test. You can pass as many test numbers as you want after the stride argument.<br />
<br />
<br />
===Using CTest to Drive Complex Tests===<br />
<br />
Sometimes to properly test a project you need to actually compile code during the testing phase. There are several reasons for this. First, if test programs are compiled as part of the main project, they can end up taking up a significant amount of the build time. Also, if a test fails to build, the main build should not fail as well. Finally, IDE projects can quickly become too large to load and work with. The CTest command supports a group of command line options that allow it to be used as the test executable to run. When used as the test executable, CTest can run CMake, run the compile step, and finally run a compiled test. We will now look at the command line options to CTest that support building and running tests.<br />
<br />
<syntaxhighlight lang="text"><br />
--build-and-test src_directory build_directory<br />
Run cmake on the given source directory using the specified build directory.<br />
--test-command Name of the program to run.<br />
--build-target Specify a specific target to build.<br />
--build-nocmake Run the build without running cmake first.<br />
--build-run-dir Specify directory to run programs from.<br />
--build-two-config Run cmake twice before the build.<br />
--build-exe-dir Specify the directory for the executable.<br />
--build-generator Specify the generator to use.<br />
--build-project Specify the name of the project to build.<br />
--build-makeprogram Specify the make program to use.<br />
--build-noclean Skip the make clean step.<br />
--build-options Add extra options to the build step.<br />
</syntaxhighlight><br />
<br />
For an example, consider the following add_test() (page 277) command taken from the CMakeLists.txt file of CMake itself. It shows how CTest can be used both to compile and run a test.<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (simple ${CMAKE_CTEST_COMMAND}<br />
--build-and-test "${CMAKE_SOURCE_DIR}/Tests/Simple"<br />
"${CMAKE_BINARY_DIR}/Tests/Simple"<br />
--build-generator ${CMAKE_GENERATOR}<br />
--build-makeprogram ${CMAKE_MAKE_PROGRAM}<br />
--build-project Simple<br />
--test-command simple)<br />
</syntaxhighlight><br />
<br />
In this example, the add_test command is first passed the name of the test, "simple". After the name of the test, the command to be run is specified. In this case, the test command to be run is CTest. The CTest command is referenced via the CMAKE_CTEST_COMMAND (page 626) variable. This variable is always set by CMake to the CTest command that came from the CMake installation used to build the project. Next, the source and binary directories are specified. The next options to CTest are the --build-generator and --build-makeprogram options. These are specified using the CMake variables CMAKE_MAKE_PROGRAM (page 630) and CMAKE_GENERATOR (page 628). Both CMAKE_MAKE_PROGRAM and CMAKE_GENERATOR are defined by CMake. This is an important step as it makes sure that the same generator is used for building the test as was used for building the project itself. The --build-project option is passed Simple, which corresponds to the project() (page 327) command used in the Simple test. The final argument is --test-command, which tells CTest the command to run once it gets a successful build, and should be the name of the executable that will be compiled by the test.<br />
<br />
<br />
===Handling a Large Number of Tests===<br />
<br />
When a large number of tests exist in a single project, it is cumbersome to have individual executables available for each test. That said, the developer of the project should not be required to create tests with complex argument parsing. This is why CMake provides a convenience command for creating a test driver program. This command is called create_test_sourcelist() (page 282). A test driver is a program that links together many small tests into a single executable. This is useful when building static executables with large libraries to shrink the total required size. The signature for create_test_sourcelist is as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
create_test_sourcelist (SourceListName<br />
DriverName<br />
test1 test2 test3<br />
EXTRA_INCLUDE include.h<br />
FUNCTION function<br />
)<br />
</syntaxhighlight><br />
<br />
The first argument is the variable which will contain the list of source files that must be compiled to make the test executable. The DriverName is the name of the test driver program (e.g. the name of the resulting executable). The rest of the arguments consist of a list of test source files. Each test source file should have a function in it that has the same name as the file with no extension (foo.cxx should have int foo(int argc, char *argv[]);). The resulting executable will be able to invoke each of the tests by name on the command line. The EXTRA_INCLUDE and FUNCTION arguments support additional customization of the test driver program. Consider the following CMakeLists file fragment to see how this command can be used:<br />
<br />
<syntaxhighlight lang="text"><br />
# create the testing file and list of tests<br />
create_test_sourcelist (Tests<br />
CommonCxxTests.cxx<br />
ObjectFactory.cxx<br />
otherArrays.cxx<br />
otherEmptyCell.cxx<br />
TestSmartPointer.cxx<br />
SystemInformation.cxx<br />
)<br />
<br />
# add the executable<br />
add_executable (CommonCxxTests ${Tests})<br />
<br />
# remove the test driver source file<br />
set (TestsToRun ${Tests})<br />
remove (TestsToRun CommonCxxTests.cxx)<br />
<br />
# Add all the ADD_TEST for each test<br />
foreach (test ${TestsToRun})<br />
get_filename_component (TName ${test} NAME_WE)<br />
add_test (NAME ${TName} COMMAND CommonCxxTests ${TName})<br />
endforeach ()<br />
</syntaxhighlight><br />
<br />
The create_test_sourcelist command is invoked to create a test driver. In this case it creates and writes CommonCxxTests.cxx into the binary tree of the project, using the rest of the arguments to determine its contents. Next, the add_executable() (page 273) command is used to add that executable to the build. Then a new variable called TestsToRun is created with an initial value of the sources required for the test driver. The remove() (page 349) command is used to remove the driver program itself from the list. Then, a foreach() (page 309) command is used to loop over the remaining sources. For each source, its name without a file extension is extracted and put in the variable TName, then a new test is added for TName. The end result is that for each source file in the create_test_sourcelist an add_test command is called with the name of the test. As more tests are added to the create_test_sourcelist command, the foreach loop will automatically call add_test for each one.<br />
<br />
<br />
===Managing Test Data===<br />
<br />
In addition to handling large numbers of tests, CMake contains a system for managing test data. It is encapsulated in the ExternalData CMake module, which downloads large data on an as-needed basis, retains version information, and allows distributed storage.<br />
<br />
The design of ExternalData follows that of distributed version control systems using hash-based file identifiers and object stores, but it also takes advantage of the presence of a dependency-based build system. The figure below illustrates the approach. Source trees contain lightweight "content links" referencing data in remote storage by hashes of their content. The ExternalData module produces build rules to download the data to local stores and reference them from build trees by symbolic links (copies on Windows).<br />
<br />
A content link is a small, plain text file containing a hash of the real data. Its name is the same as its data file, with an additional extension identifying the hash algorithm, e.g. img.png.md5. Content links always take the same (small) amount of space in the source tree regardless of the real data size. The CMakeLists.txt CMake configuration files refer to data using a DATA{} syntax inside calls to the ExternalData module API. For example, DATA{img.png} tells the ExternalData module to make img.png available in the build tree even if only an img.png.md5 content link appears in the source tree.<br />
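<br />
For example, the content link for img.png would be a file named img.png.md5 sitting next to it in the source tree, containing only the MD5 hash of the real file. The value shown here is a placeholder, not a real checksum:<br />
<br />
<syntaxhighlight lang="text"><br />
0123456789abcdef0123456789abcdef<br />
</syntaxhighlight><br />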
<br />
<<Figure 10.1: ExternalData module flow chart>><br />
<br />
The ExternalData module implements a flexible system to prevent duplication of content fetching and storage. Objects are retrieved from a list of (possibly redundant) local and remote locations specified in the ExternalData CMake configuration as a list of "URL templates". The only requirement of remote storage systems is the ability to fetch from a URL that locates content through specification of the hash algorithm and hash value. Local or networked file systems, an Apache FTP server or a Midas<ref>http://www.midasplatform.org</ref> server, for example, all have this capability. Each URL template has %(algo) and %(hash) placeholders for ExternalData to replace with values from a content link.<br />
<br />
A persistent local object store can cache downloaded content to share among build trees by setting the ExternalData_OBJECT_STORES CMake build configuration variable. This is helpful to de-duplicate content for multiple build trees. It also resolves an important pragmatic concern in a regression testing context; when many machines simultaneously start a nightly dashboard build, they can use their local object store instead of overloading the data servers and flooding network traffic.<br />
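<br />
Such a shared store can be configured when CMake is run on a build tree; the path here is an arbitrary example:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake -DExternalData_OBJECT_STORES=/opt/data-store ../MyProject<br />
</syntaxhighlight><br />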
<br />
Retrieval is integrated with a dependency-based build system, so resources are fetched only when needed. For example, if the system is used to retrieve testing data and BUILD_TESTING is OFF, the data are not retrieved unnecessarily. When the source tree is updated and a content link changes, the build system fetches the new data as needed.<br />
<br />
Since all references leaving the source tree go through hashes, they do not depend on any external state. Remote and local object stores can be relocated without invalidating content links in older versions of the source code. Content links within a source tree can be relocated or renamed without modifying the object stores. Duplicate content links can exist in a source tree, but download will only occur once. Multiple versions of data with the same source tree file name in a project's history are uniquely identified in the object stores.<br />
<br />
Hash-based systems allow the use of untrusted connections to remote resources because downloaded content is verified after it is retrieved. Configuration of the URL templates list improves robustness by allowing multiple redundant remote storage resources. Storage resources can also change over time on an as-needed basis. If a project's remote storage moves over time, a build of older source code versions is always possible by adjusting the URL templates configured for the build tree or by manually populating a local object store.<br />
<br />
A simple application of the ExternalData module looks like the following:<br />
<br />
<syntaxhighlight lang="text"><br />
include (ExternalData)<br />
set (midas "http://midas.kitware.com/MyProject")<br />
<br />
<br />
# Add standard remote object stores to user's<br />
# configuration.<br />
list (APPEND ExternalData_URL_TEMPLATES<br />
"${midas}?algorithm=%(algo)&hash=%(hash)"<br />
"ftp://myproject.org/files/%(algo)/%(hash)"<br />
)<br />
# Add a test referencing data.<br />
ExternalData_Add_Test (MyProjectData<br />
NAME SmoothingTest<br />
COMMAND SmoothingExe DATA{Input/Image.png}<br />
SmoothedImage.png<br />
)<br />
# Add a build target to populate the real data.<br />
ExternalData_Add_Target (MyProjectData)<br />
</syntaxhighlight><br />
<br />
The ExternalData_Add_Test function is a wrapper around CMake's add_test command. The source tree is probed for an Input/Image.png.md5 content link containing the data's MD5 hash. After checking the local object store, a request is made sequentially to each URL in the ExternalData_URL_TEMPLATES list with the data's hash. Once found, a symlink is created in the build tree. The DATA{Input/Image.png} path will expand to the build tree path in the test command line. Data are retrieved when the MyProjectData target is built.<br />
<br />
<br />
===Producing Test Dashboards===<br />
<br />
As your project's testing needs grow, keeping track of the test results can become overwhelming. This is especially true for projects that are tested nightly on a number of different platforms. In these cases, we recommend using a test dashboard to summarize the test results. (see Figure 10.2)<br />
<br />
A test dashboard summarizes the results for many tests on many platforms, and its hyperlinks allow people to drill down into additional levels of detail quickly. The CTest executable includes support for producing test dashboards. When run with the correct options, CTest will produce XML-based output recording the build and test results, and post them to a dashboard server. The dashboard server runs an open source software package called CDash. CDash collects the XML results and produces HTML web pages from them.<br />
<br />
Before discussing how to use CTest to produce a dashboard, let us consider the main parts of a testing dashboard. Each night at a specified time, the dashboard server will open up a new dashboard so each day there is a new web page showing the results of tests for that twenty-four hour period. There are links on the main page that allow you to quickly navigate through different days. Looking at the main page for a project (such as CMake's dashboard off of www.cmake.org), you will see that it is divided into a few main components. Near the top you will find a set of links that allow you to step to previous dashboards, as well as links to project pages such as the bug tracker, documentation, etc.<br />
<br />
<<Figure 10.2: Sample Testing Dashboard>><br />
<br />
Below that, you will find groups of results. Typically groups that you will find include Nightly, Experimental, Continuous, Coverage, and Dynamic Analysis (see Figure 10.3). The category into which a dashboard entry will be placed depends on how it was generated. The simplest are Experimental entries, which represent dashboard results for someone's current copy of the project's source code. With an experimental dashboard, the source code is not guaranteed to be up to date. In contrast, a Nightly dashboard entry is one where CTest tries to update the source code to a specific date and time. The expectation is that all nightly dashboard entries for a given day should be based on the same source code.<br />
<br />
<<Figure 10.3: Experimental, Coverage, and Dynamic Analysis Results>><br />
<br />
A continuous dashboard entry is one that is designed to run every time new files are checked in. Depending on how frequently new files are checked in, a single day's dashboard could have many continuous entries. Continuous dashboards are particularly helpful for cross-platform projects where a problem may only show up on some platforms. In those cases, a developer can commit a change that works on their platform, and another platform running a continuous build can catch the error, allowing the developer to correct the problem promptly.<br />
<br />
Dynamic Analysis and Coverage dashboards are designed to test the memory safety and code coverage of a project. A Dynamic Analysis dashboard entry is one where all the tests are run with a memory access/leak checking program enabled. Any resulting errors or warnings are parsed, summarized, and displayed. This is important to verify that your software is not leaking memory, or reading from uninitialized memory. Coverage dashboard entries are similar in that all the tests are run, but as they run, the lines of code being executed are tracked. When all the tests have been run, a listing of how many times each line of code was executed is produced and displayed on the dashboard.<br />
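<br />
Each of these dashboard entry types corresponds to a CTest dashboard mode. From a configured build tree, entries of each type can be produced with invocations such as:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D Experimental          # build and test the current source, submit results<br />
ctest -D Nightly               # update to the nightly time, build, test, submit<br />
ctest -D ExperimentalCoverage  # collect code coverage results<br />
ctest -D ExperimentalMemCheck  # run the tests under a memory checking tool<br />
</syntaxhighlight><br />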
<br />
<br />
====Adding CDash Dashboard Support to a Project====<br />
<br />
In this section we show how to submit results to the CDash dashboard. You can either use the Kitware CDash servers at my.cdash.org or you can set up your own CDash server as described in section 10.11. If you are using my.cdash.org, you can click on the "Start My Project" button, which will ask you to create an account (or login if you already have one), and then bring you to a page to start creating your project. If you have installed your own CDash server, then you should login to your CDash server as administrator and select "Create New Project" from the administration panel. Regardless of which approach you use, the next few steps will be to fill in information about your project as shown in Figure 10.4. Many of the items below are optional, so do not be concerned if you do not have a value for them; just leave them empty if they don't apply.<br />
<br />
<<Figure 10.4: Creating a new project in CDash>><br />
<br />
'''Name:''' what you want to call the project.<br />
<br />
'''Description:''' description of the project to be shown on the first page.<br />
<br />
'''Home URL:''' home URL of the project to appear in the main menu of the dashboard.<br />
<br />
'''Bug Tracker URL:''' URL to the bug tracker. Currently CDash supports Mantis<ref>http://www.mantisbt.org/</ref>, and if a bug is entered in the repository with the message "BUG: 132456", CDash will automatically link to the appropriate bug.<br />
<br />
'''Documentation URL:''' URL to where the project's documentation is kept. This will appear in the main menu of the dashboard.<br />
<br />
'''Public Dashboard:''' if checked, the dashboard is public and anybody can see the results of the dashboard. If unchecked, only users assigned to this project can access the dashboard.<br />
<br />
'''Logo:''' logo of the project to be displayed on the main dashboard. Optimal size for a logo is 100x100 pixels. Transparent GIFs work best as they can blend in with the CDash background.<br />
<br />
'''Repository Viewer URL:''' URL of the web repository browser. CDash currently supports ViewCVS, Trac, Fisheye, ViewVC, WebSVN, Loggerhead, GitHub, gitweb, hgweb, and others. Some example URLs are:<br />
* http://public.kitware.com/cgi-bin/viewcvs.cgi/?cvsroot=CMake (for ViewVC)<br />
* https://www.kitware.com/websvn/listing.php?repname=MyRepository (for WebSVN)<br />
<br />
'''Repositories:''' in order to display the daily updates, CDash gets a diff of the modified files. Currently CDash supports only anonymous repository access. A typical URL is :pserver:anoncvs@myproject.org:/cvsroot/MyProject.<br />
<br />
'''Nightly Start Time:''' CDash displays the current dashboard using a 24 hour window. The nightly start time defines the beginning of this window. Note that the start time is expressed in the form HH:MM:SS TZ, e.g. 01:00:00 UTC. It is recommended to express the nightly start time in UTC to keep operations running smoothly across the boundaries of local time changes, like moving to or from daylight saving time.<br />
<br />
'''Coverage Threshold:''' CDash marks that coverage has passed (green) if the global coverage for a build or specific files is above this threshold. It is recommended to set the coverage threshold to a high value and decrease it as you focus on improving your coverage.<br />
<br />
'''Enable Test Timing:''' enable/disable test timing for this project. See "Test timing" in the next section for more information.<br />
<br />
'''Test Time Standard Deviation:''' set a multiplier for the standard deviation of a test time. If the time for a test is higher than the mean + multiplier * standard deviation, the test time status is marked as failed. The default value is 4 if not specified. Note that changing this value does not affect previous builds; it affects only builds submitted after the modification.<br />
<br />
'''Test Time Standard Deviation Threshold:''' set a minimum standard deviation for a test time. If the current standard deviation for a test is lower than this threshold, then the threshold is used instead. This is particularly important for tests that have a very low standard deviation, but still some variability. The default threshold is set to 2 if not specified. Note that changing this value does not affect previous builds, only builds submitted after the modification.<br />
<br />
'''Test Time # Max Failures Before Flag:''' some tests might take longer from one day to another depending on the client machine load. This variable defines the number of times a test should fail because of timing issues before being flagged.<br />
<br />
'''Email Submission Failures:''' enable/disable sending email when a build fails (configure errors, build errors, warnings, update failures, and test failures) for this project. This is a general feature.<br />
<br />
<br />
'''Email Redundant Failures:''' by default CDash does not send email for the same failure more than once. For instance, if a build continues to fail over time, only one email is sent. If Email Redundant Failures is checked, CDash will send an email every time a build has a failure.<br />
<br />
'''Email Build Missing:''' enable/disable sending email when an expected build has not been submitted.<br />
<br />
'''Email Low Coverage:''' enable/disable sending email when the coverage for files is lower than the threshold value specified above.<br />
<br />
'''Email Test Timing Changed:''' enable/disable sending email when a test's timing has changed.<br />
<br />
'''Maximum Number of Items in Email:''' dictates how many failures should be sent in an email.<br />
<br />
'''Maximum Number of Characters in Email:''' dictates how many characters from the log should be sent in the email.<br />
<br />
'''Google Analytics Tracker:''' CDash supports visitor tracking through Google analytics. See "Adding Google Analytics" for more information.<br />
<br />
'''Show Site IP Addresses:''' enable/disable the display of IP addresses of the sites submitting to this project.<br />
<br />
'''Display Labels:''' as of CDash 1.4 and CTest 2.8, labels can be attached to various build and test results. If checked, these labels are displayed on applicable CDash pages.<br />
<br />
'''AutoRemove Timeframe:''' set the number of days to retain results for this project. If the timeframe is less than 2 days, CDash will not remove any builds.<br />
<br />
'''AutoRemove Max Builds:''' set the maximum number of builds to remove when performing the auto removal of builds.<br />
<br />
<br />
After providing this information, you can click on "Create Project" to create the project in CDash. At this point the server is ready to accept dashboard submissions. The next step is to provide the dashboard server information to your software project. This information is kept in a file named CTestConfig.cmake at the top level of your source tree. You can download this file by clicking on the "Edit Project" button for your dashboard (it looks like a pie chart with a wrench underneath it), then clicking on the miscellaneous tab and selecting "Download CTestConfig", and then saving the CTestConfig.cmake file in your source tree. In the next section, we review this file in more detail.<br />
<br />
<br />
====Client Setup====<br />
<br />
To support dashboards in your project you need to include the CTest module as follows.<br />
<br />
<syntaxhighlight lang="text"><br />
# Include CDash dashboard testing module<br />
include (CTest)<br />
</syntaxhighlight><br />
<br />
The CTest module will then read settings from the CTestConfig.cmake file you downloaded from CDash. If you have added add_test() (page 277) command calls to your project, creating a dashboard entry is as simple as running:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D Experimental<br />
</syntaxhighlight><br />
<br />
The -D option tells CTest to create a dashboard entry. The next argument indicates what type of dashboard entry to create. Creating a dashboard entry involves quite a few steps that can be run independently, or as one command. In this example, the Experimental argument will cause CTest to perform a number of different steps as one command. The different steps of creating a dashboard entry are summarized below.<br />
<br />
'''Start''' Prepare a new dashboard entry. This creates a Testing subdirectory in the build directory. The Testing subdirectory will contain a subdirectory for the dashboard results with a name that corresponds to the dashboard time. The Testing subdirectory will also contain a subdirectory for the temporary testing results called Temporary.<br />
<br />
'''Update''' Perform a source control update of the source code (typically used for nightly or continuous runs). Currently CTest supports Concurrent Versions System (CVS), Subversion, Git, Mercurial, and Bazaar.<br />
<br />
'''Configure''' Run CMake on the project to make sure the Makefiles or project files are up to date.<br />
<br />
'''Build''' Build the software using the specified generator.<br />
<br />
'''Test''' Run all the tests and record the results.<br />
<br />
'''MemoryCheck''' Perform memory checks using Purify or valgrind.<br />
<br />
'''Coverage''' Collect source code coverage information using gcov or Bullseye.<br />
<br />
'''Submit''' Submit the testing results as a dashboard entry to the server.<br />
<br />
Each of these steps can be run independently for a Nightly or Experimental entry using the following syntax:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D NightlyStart<br />
ctest -D NightlyBuild<br />
ctest -D NightlyCoverage -D NightlySubmit<br />
</syntaxhighlight><br />
<br />
or<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D ExperimentalStart<br />
ctest -D ExperimentalConfigure<br />
ctest -D ExperimentalCoverage -D ExperimentalSubmit<br />
</syntaxhighlight><br />
<br />
Alternatively, you can use shortcuts that perform the most common combinations all at once. The shortcuts that CTest has defined include:<br />
<br />
'''ctest -D Experimental''' performs the start, configure, build, test, coverage, and submit commands.<br />
<br />
'''ctest -D Nightly''' performs the start, update, configure, build, test, coverage, and submit commands.<br />
<br />
'''ctest -D Continuous''' performs the start, update, configure, build, test, coverage, and submit commands.<br />
<br />
'''ctest -D MemoryCheck''' performs the start, configure, build, memorycheck, coverage, and submit commands.<br />
<br />
When first setting up a dashboard it is often useful to combine the -D option with the -V option. This will allow you to see the output of all the different stages of the dashboard process. Likewise, CTest maintains log files in the Testing/Temporary directory it creates in your binary tree. There you will find log files for the most recent dashboard run. The dashboard results (XML files) are stored in the Testing directory as well.<br />
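<br />
The combination described above is simply the dashboard model plus the verbose flag; for example:<br />
<br />
<syntaxhighlight lang="text"><br />
# run an experimental dashboard, printing the output of each<br />
# stage (configure, build, test, submit) to the console<br />
ctest -D Experimental -V<br />
</syntaxhighlight><br />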
<br />
<br />
===Customizing Dashboards for a Project===<br />
<br />
CTest has a few options that can be used to control how it processes a project. If, when CTest runs a dashboard, it finds CTestCustom.ctest files in the binary tree, it will load these files and use the settings from them to control its behavior. The syntax of a CTestCustom file is the same as regular CMake syntax, although only set commands are normally used in this file. These commands specify properties that CTest will consider when performing the testing.<br />
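<br />
As a minimal sketch of such a file (the values shown are arbitrary examples, not required settings), a CTestCustom.ctest placed in the binary tree might contain nothing more than a few set commands using the variables described later in this section:<br />
<br />
<syntaxhighlight lang="text"><br />
# CTestCustom.ctest -- loaded by CTest from the binary tree<br />
<br />
# raise the cap on reported warnings for this project<br />
set (CTEST_CUSTOM_MAXIMUM_NUMBER_OF_WARNINGS 100)<br />
<br />
# exclude files matching these expressions from coverage<br />
set (CTEST_CUSTOM_COVERAGE_EXCLUDE<br />
  ${CTEST_CUSTOM_COVERAGE_EXCLUDE}<br />
  "/ThirdParty/"<br />
  )<br />
</syntaxhighlight><br />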
<br />
<br />
====Dashboard Submissions Settings====<br />
<br />
A number of the basic dashboard settings are provided in the file that you download from CDash. You can edit these initial values and provide additional values if you wish. The first value that is set is the nightly start time. This is the time that dashboards all around the world will use for checking out their copy of the nightly source code. This time also controls how dashboard submissions will be grouped together. All submissions from the nightly start time until the next nightly start time will be included on the same "day".<br />
<br />
<syntaxhighlight lang="text"><br />
# Dashboard is opened for submissions for a 24 hour period<br />
# starting at the specified NIGHTLY_START_TIME. Time is<br />
# specified in 24 hour format.<br />
set (CTEST_NIGHTLY_START_TIME "01:00:00 UTC")<br />
</syntaxhighlight><br />
<br />
The next group of settings control where to submit the testing results. This is the location of the CDash server.<br />
<br />
<syntaxhighlight lang="text"><br />
# CDash server to submit results (used by client)<br />
set (CTEST_DROP_METHOD http)<br />
set (CTEST_DROP_SITE "my.cdash.org")<br />
set (CTEST_DROP_LOCATION "/submit.php?project=KensTest")<br />
set (CTEST_DROP_SITE_CDASH TRUE)<br />
</syntaxhighlight><br />
<br />
The CTEST_DROP_SITE (page 678) specifies the location of the CDash server. Build and test results generated by CDash clients are sent to this location. The CTEST_DROP_LOCATION (page 678) is the directory or the HTTP URL on the server where CDash clients leave their build and test reports. The CTEST_DROP_SITE_CDASH (page 678) specifies that the current server is CDash, which prevents CTest from trying to "trigger" the submission (this is still done if this variable is not set to allow for backwards compatibility with Dart and Dart 2).<br />
<br />
Currently CDash supports only the HTTP drop submission method; however, CTest supports other submission types. The CTEST_DROP_METHOD (page 678) specifies the method used to submit testing results. The most common setting for this will be HTTP, which uses the Hyper Text Transfer Protocol (HTTP) to transfer the test data to the server. Other drop methods are supported for special cases such as FTP and SCP. In the example above, clients that are submitting their results using the HTTP protocol use a web address as their drop site. If the submission is via FTP, this location is relative to where the CTEST_DROP_SITE_USER (page 678) will log in by default. The CTEST_DROP_SITE_USER specifies the FTP username the client will use on the server. For FTP submissions this user will typically be "anonymous". However, any username that can communicate with the server can be used. For FTP servers that require a password, it can be stored in the CTEST_DROP_SITE_PASSWORD (page 678) variable. The CTEST_DROP_SITE_MODE (not used in this example) is an optional variable that you can use to specify the FTP mode. Most FTP servers will handle the default passive mode, but you can set the mode explicitly to active if your server does not.<br />
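<br />
For completeness, an FTP submission might be configured as in the following sketch; the server name and directory here are hypothetical placeholders, not values supplied by CDash:<br />
<br />
<syntaxhighlight lang="text"><br />
# hypothetical FTP drop configuration<br />
set (CTEST_DROP_METHOD ftp)<br />
set (CTEST_DROP_SITE "ftp.myproject.org")<br />
set (CTEST_DROP_LOCATION "/incoming")<br />
set (CTEST_DROP_SITE_USER "anonymous")<br />
</syntaxhighlight><br />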
<br />
CTest can also be run from behind a firewall. If the firewall allows FTP or HTTP traffic, then no additional settings are required. If the firewall requires an FTP/HTTP proxy or uses a SOCKS4 or SOCKS5 type proxy, some environment variables need to be set. HTTP_PROXY and FTP_PROXY specify the servers that service HTTP and FTP proxy requests. HTTP_PROXY_PORT and FTP_PROXY_PORT specify the port on which the HTTP and FTP proxies reside. HTTP_PROXY_TYPE specifies the type of the HTTP proxy used. The three different types of proxies supported are the default, which includes a generic HTTP/FTP proxy, "SOCKS4", and "SOCKS5", which specify SOCKS4 and SOCKS5 compatible proxies.<br />
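<br />
For example, a client behind a SOCKS5 proxy might set the following environment variables before running CTest; the host name and port are placeholders:<br />
<br />
<syntaxhighlight lang="text"><br />
HTTP_PROXY=proxy.example.com<br />
HTTP_PROXY_PORT=1080<br />
HTTP_PROXY_TYPE=SOCKS5<br />
</syntaxhighlight><br />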
<br />
<br />
====Filtering Errors and Warnings====<br />
<br />
By default, CTest has a list of regular expressions that it matches for finding the errors and warnings from the output of the build process. You can override these settings in your CTestCustom.ctest files using several variables as shown below.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CUSTOM_WARNING_MATCH<br />
${CTEST_CUSTOM_WARNING_MATCH}<br />
"{standard input}:[0-9][0-9]*: Warning: "<br />
)<br />
<br />
set (CTEST_CUSTOM_WARNING_EXCEPTION<br />
${CTEST_CUSTOM_WARNING_EXCEPTION}<br />
"tk8.4.5/[^/]+/[^/]+.c[:\"]"<br />
"xtree.[0-9]+. : warning C4702: unreachable code"<br />
"warning LNK4221"<br />
"variable .var_args[2]*. is used before its value is set"<br />
"jobserver unavailable"<br />
)<br />
</syntaxhighlight><br />
<br />
Another useful feature of the CTestCustom files is that you can use them to limit the tests that are run for memory checking dashboards. Memory checking using Purify or valgrind is a CPU intensive process that can take twenty hours for a dashboard that normally takes one hour. To help alleviate this problem, CTest allows you to exclude some of the tests from the memory checking process as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CUSTOM_MEMCHECK_IGNORE<br />
${CTEST_CUSTOM_MEMCHECK_IGNORE}<br />
TestSetGet<br />
otherPrint-ParaView<br />
Example-vtkLocal<br />
Example-vtkMy<br />
)<br />
</syntaxhighlight><br />
<br />
The format for excluding tests is simply a list of test names as specified when the tests were added in your CMakeLists file with add_test() (page 277).<br />
<br />
In addition to the demonstrated settings, such as CTEST_CUSTOM_WARNING_MATCH, CTEST_CUSTOM_WARNING_EXCEPTION, and CTEST_CUSTOM_MEMCHECK_IGNORE, CTest also checks several other variables.<br />
<br />
'''CTEST_CUSTOM_ERROR_MATCH''' Additional regular expressions to consider a build line as an error line<br />
<br />
'''CTEST_CUSTOM_ERROR_EXCEPTION''' Additional regular expressions to consider a build line not as an error line<br />
<br />
'''CTEST_CUSTOM_WARNING_MATCH''' Additional regular expressions to consider a build line as a warning line<br />
<br />
'''CTEST_CUSTOM_WARNING_EXCEPTION''' Additional regular expressions to consider a build line not as a warning line<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_NUMBER_OF_ERRORS''' Maximum number of errors before CTest stops reporting errors (default 50)<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_NUMBER_OF_WARNINGS''' Maximum number of warnings before CTest stops reporting warnings (default 50)<br />
<br />
'''CTEST_CUSTOM_COVERAGE_EXCLUDE''' Regular expressions for files to be excluded from the coverage analysis<br />
<br />
'''CTEST_CUSTOM_PRE_MEMCHECK''' List of commands to execute before performing memory checking<br />
<br />
'''CTEST_CUSTOM_POST_MEMCHECK''' List of commands to execute after performing memory checking<br />
<br />
'''CTEST_CUSTOM_MEMCHECK_IGNORE''' List of tests to exclude from the memory checking step<br />
<br />
'''CTEST_CUSTOM_PRE_TEST''' List of commands to execute before performing testing<br />
<br />
'''CTEST_CUSTOM_POST_TEST''' List of commands to execute after performing testing<br />
<br />
'''CTEST_CUSTOM_TESTS_IGNORE''' List of tests to exclude from the testing step<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_PASSED_TEST_OUTPUT_SIZE''' Maximum size of test output for the passed test (default 1k)<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_FAILED_TEST_OUTPUT_SIZE''' Maximum size of test output for the failed test (default 300k)<br />
<br />
Commands specified in CTEST_CUSTOM_PRE_TEST and CTEST_CUSTOM_POST_TEST, as well as the equivalent memory checking ones, are executed once per CTest run. These commands can be used, for example, if all tests require some initial setup and some final cleanup to be performed.<br />
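<br />
For instance, a shared scratch directory could be created before the tests and removed afterwards; the directory name below is a hypothetical placeholder:<br />
<br />
<syntaxhighlight lang="text"><br />
# create a scratch area before testing, remove it after<br />
set (CTEST_CUSTOM_PRE_TEST<br />
  "cmake -E make_directory /tmp/test-scratch")<br />
set (CTEST_CUSTOM_POST_TEST<br />
  "cmake -E remove_directory /tmp/test-scratch")<br />
</syntaxhighlight><br />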
<br />
<br />
====Adding Notes to a Dashboard====<br />
<br />
CTest and CDash support adding note files to a dashboard submission. These will appear on the dashboard as a clickable icon that links to the text of all the files. To add notes, call CTest with the -A option followed by a semicolon-separated list of filenames. The contents of these files will be submitted as notes for the dashboard. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D Continuous -A C:/MyNotes.txt;C:/OtherNotes.txt<br />
</syntaxhighlight><br />
<br />
Another way to submit notes with a dashboard is to copy or write the notes as files into a Notes directory under the Testing directory of your binary tree. Any files found there when CTest submits a dashboard will also be uploaded as notes.<br />
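<br />
For example, on a Unix-like client the following commands would stage a note file before submission; the file name is a placeholder:<br />
<br />
<syntaxhighlight lang="text"><br />
# copy the note into the Notes directory of the binary tree,<br />
# then run the dashboard as usual<br />
mkdir -p Testing/Notes<br />
cp BuildLog.txt Testing/Notes/<br />
ctest -D Experimental<br />
</syntaxhighlight><br />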
<br />
<br />
===Setting up Automated Dashboard Clients===<br />
<br />
'''IMPORTANT:''' This section is obsolete, and left in only for reference. To setup new dashboards, please skip ahead to the next section, and write an "advanced ctest script" instead of following the directions in this section.<br />
<br />
CTest has a built-in scripting mode to help make the process of setting up dashboard clients even easier. CTest scripts will handle most of the common tasks and options that CTest -D Nightly does not. The dashboard script is written using CMake syntax and mainly involves setting up different variables or options, or creating an elaborate procedure, depending on the complexity of testing. Once you have written the script you can run the nightly dashboard as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -S myScript.cmake<br />
</syntaxhighlight><br />
<br />
First we will consider the most basic script you can use, and then we will cover the different options you can make use of. There are four variables that you must always set in your scripts. The first two variables are the names of the source and binary directories on disk, CTEST_SOURCE_DIRECTORY (page 680) and CTEST_BINARY_DIRECTORY (page 675). These should be fully specified paths. The next variable, CTEST_COMMAND, specifies which CTest command to use for running the dashboard. This may seem a bit confusing at first. The -S option of CTest is provided to do all the setup and customization for a dashboard, but the actual running of the dashboard is done with another invocation of CTest -D. Basically, once the CTest script has done what it needs to do to set up the dashboard, it invokes CTest -D to actually generate the results. You can adjust the value of CTEST_COMMAND to control what type of dashboard to generate (Nightly, Experimental, Continuous), as well as to pass other options to the internal CTest process, such as -I,,7 to run every 7th test. To refer to the CTest that is running the script, use the variable CTEST_EXECUTABLE_NAME. The last required variable is CTEST_CMAKE_COMMAND, which specifies the full path to the cmake executable that will be used to configure the dashboard. To refer to the CMake command that corresponds to the CTest command running the script, use the variable CMAKE_EXECUTABLE_NAME. The CTest script does an initial configuration with cmake in order to generate the CTestConfig.cmake file that CTest will use for the dashboard. The following example demonstrates the use of these four variables and is an example of the simplest script you can have.<br />
<br />
<syntaxhighlight lang="text"><br />
# these are the source and binary directories on disk<br />
set (CTEST_SOURCE_DIRECTORY C:/martink/test/CMake)<br />
set (CTEST_BINARY_DIRECTORY C:/martink/test/CMakeBin)<br />
<br />
# which CTest command to use for running the dashboard<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\" -D Nightly"<br />
)<br />
<br />
# what CMake command to use for configuring this dashboard<br />
set (CTEST_CMAKE_COMMAND<br />
"\"${CMAKE_EXECUTABLE_NAME}\""<br />
)<br />
</syntaxhighlight><br />
<br />
The script above is not that different from running CTest -D from the command line yourself. All it adds is that it verifies that the binary directory exists and creates it if it does not. Where CTest scripting really shines is in the optional features it supports. We will consider these options one by one, starting with one of the most commonly used, CTEST_START_WITH_EMPTY_BINARY_DIRECTORY. When this variable is set to true, it will delete the binary directory and then recreate it as an empty directory prior to running the dashboard. This guarantees that you are testing a clean build every time the dashboard is run. To use this option you simply set it in your script. In the example above we would simply add the following lines:<br />
<br />
<syntaxhighlight lang="text"><br />
# should CTest wipe the binary tree before running<br />
set (CTEST_START_WITH_EMPTY_BINARY_DIRECTORY TRUE)<br />
</syntaxhighlight><br />
<br />
Another commonly used option is the CTEST_INITIAL_CACHE variable. Whatever values you set this to will be written into the CMakeCache file prior to running the dashboard. This is an effective and simple way to initialize a cache with some preset values. The syntax is the same as what is in the cache with the exception that you must escape any quotes. Consider the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
# this is the initial cache to use for the binary tree, be<br />
# careful to escape any quotes inside of this string<br />
set (CTEST_INITIAL_CACHE "<br />
<br />
//Command used to build entire project from the command line.<br />
MAKECOMMAND:STRING=\"devenv.com\" CMake.sln /build Debug /project ALL_BUILD<br />
<br />
//make program<br />
CMAKE_MAKE_PROGRAM:FILEPATH=C:/PROGRA~1/MICROS~1.NET/Common7/IDE/devenv.com<br />
<br />
//Name of generator.<br />
CMAKE_GENERATOR:INTERNAL=Visual Studio 7 .NET 2003<br />
<br />
//Path to a program.<br />
CVSCOMMAND:FILEPATH=C:/cygwin/bin/cvs.exe<br />
<br />
//Name of the build<br />
BUILDNAME:STRING=Win32-vs71<br />
<br />
//Name of the computer/site where compile is being run<br />
SITE:STRING=DASH1.kitware<br />
<br />
")<br />
</syntaxhighlight><br />
<br />
Note that the above code is basically just one set() (page 330) command setting the value of CTEST_INITIAL_CACHE to a multiline string value. For Windows builds, these are the most common cache entries that need to be set prior to running the dashboard. The first three values control what compiler will be used to build this dashboard (Visual Studio 7.1 in this example). CVSCOMMAND might be found automatically, but if not it can be set here. The last two cache entries are the names that will be used to identify this dashboard submission on the dashboard.<br />
<br />
The next two variables work together to support additional directories and projects. For example, imagine that you had a separate data directory that you needed to keep up-to-date with your source directory. Setting the variables CTEST_CVS_COMMAND (page 677) and CTEST_EXTRA_UPDATES_1 tells CTest to perform a cvs update on the specified directory, with the specified arguments prior to running the dashboard. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
# what cvs command to use for configuring this dashboard<br />
set (CTEST_CVS_COMMAND "C:/cygwin/bin/cvs.exe")<br />
<br />
# set any extra directories to do an update on<br />
set (CTEST_EXTRA_UPDATES_1<br />
"C:/Dashboards/My Tests/VTKData" "-dAP")<br />
</syntaxhighlight><br />
<br />
If you have more than one directory that needs to be updated you can use CTEST_EXTRA_UPDATES_2 through CTEST_EXTRA_UPDATES_9 in the same manner. The next variable you can set is called CTEST_ENVIRONMENT. This variable consolidates several set commands into a single command. Setting this variable allows you to set environment variables that will be used by the process running the dashboards. You can set as many environment variables as you want using the syntax shown below.<br />
<br />
<syntaxhighlight lang="text"><br />
# set any extra environment variables here<br />
set (CTEST_ENVIRONMENT<br />
"DISPLAY=:0"<br />
"USE_GCC_MALLOC=1"<br />
)<br />
# is the same as<br />
set (ENV{DISPLAY} ":0")<br />
set (ENV{USE_GCC_MALLOC} "1")<br />
</syntaxhighlight><br />
<br />
The final general purpose option we will discuss is CTest's support for restoring a bad dashboard. In some cases, you might want to make sure that you always have a working build of the software. In other instances, you might use the resulting executables or libraries from one dashboard in the build process of another dashboard. If the first dashboard fails in either of these situations, it is best to drop back to the last previously working dashboard. You can do this in CTest by setting CTEST_BACKUP_AND_RESTORE to true. When this is set to true, CTest will first back up the source and binary directories. It will then check out a new source directory and create a new binary directory. After that, it will run a full dashboard. If the dashboard is successful, the backup directories are removed; if for some reason the new dashboard fails, the new directories will be removed and the old directories restored. To make this work, you must also set the CTEST_CVS_CHECKOUT (page 677) variable. This should be set to the command required to check out your source tree. This doesn't actually have to be cvs, but it must result in a source tree in the correct location. Consider the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
# do a backup and should the build fail restore,<br />
# if this is true you must set the CTEST_CVS_CHECKOUT<br />
# variable below.<br />
set (CTEST_BACKUP_AND_RESTORE TRUE)<br />
<br />
# this is the full cvs command to checkout the source dir<br />
# this will be run from the directory above the source dir<br />
set (CTEST_CVS_CHECKOUT<br />
"/usr/bin/cvs -d /cvsroot/FOO co -d FOO FOO"<br />
)<br />
</syntaxhighlight><br />
<br />
Note that whatever checkout command you specify will be run from the directory above the source directory. A typical nightly dashboard client script will look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_SOURCE_NAME CMake)<br />
set (CTEST_BINARY_NAME CMake-gcc)<br />
set (CTEST_DASHBOARD_ROOT "$ENV{HOME}/Dashboards/My Tests")<br />
<br />
set (CTEST_SOURCE_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_SOURCE_NAME}")<br />
set (CTEST_BINARY_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_BINARY_NAME}")<br />
<br />
# which ctest command to use for running the dashboard<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\"<br />
-D Nightly<br />
-A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\"")<br />
<br />
# what CMake command to use for configuring this dashboard<br />
set (CTEST_CMAKE_COMMAND "\"${CMAKE_EXECUTABLE_NAME}\"")<br />
<br />
# should ctest wipe the binary tree before running<br />
set (CTEST_START_WITH_EMPTY_BINARY_DIRECTORY TRUE)<br />
# this is the initial cache to use for the binary tree<br />
set (CTEST_INITIAL_CACHE "<br />
SITE:STRING=midworld.kitware<br />
BUILDNAME:STRING=DarwinG5-g++<br />
MAKECOMMAND:STRING=make -i -j2<br />
")<br />
<br />
# set any extra environment variables here<br />
set (CTEST_ENVIRONMENT<br />
"CC=gcc"<br />
"CXX=g++"<br />
)<br />
</syntaxhighlight><br />
<br />
<br />
====Settings for Continuous Dashboards====<br />
<br />
The next three variables are used for setting up continuous dashboards. As mentioned earlier, a continuous dashboard is designed to run repeatedly throughout the day, providing quick feedback on the state of the software. If you are doing a continuous dashboard you can use CTEST_CONTINUOUS_DURATION and CTEST_CONTINUOUS_MINIMUM_INTERVAL to run the continuous repeatedly. The duration controls how long the script should run continuous dashboards, and the minimum interval specifies the shortest allowed time between continuous dashboards. For example, say that you want to run a continuous dashboard from 9AM until 7PM and that you want no more than one dashboard every twenty minutes. To do this you would set the duration to 600 minutes (ten hours) and the minimum interval to 20 minutes. If you run the test script at 9AM it will start a continuous dashboard. When that dashboard finishes it will check to see how much time has elapsed. If less than 20 minutes have elapsed, CTest will sleep until the 20 minutes are up. If 20 or more minutes have elapsed, it will immediately start another continuous dashboard. Do not be concerned that you will end up with 30 dashboards a day (10 hours * three times an hour). If there have been no changes to the source code, CTest will not build and submit a dashboard. It will instead wait until the next interval is up and then check again. Using this feature just involves setting the following variables to the values you desire.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CONTINUOUS_DURATION 600)<br />
set (CTEST_CONTINUOUS_MINIMUM_INTERVAL 20)<br />
</syntaxhighlight><br />
<br />
Earlier, we introduced the CTEST_START_WITH_EMPTY_BINARY_DIRECTORY variable that can be set to start the dashboards with an empty binary directory. If this is set to true for a continuous dashboard, then every continuous run where there has been a change in the source code will result in a complete build from scratch. For larger projects this can significantly limit the number of continuous dashboards that can be generated in a day, while not using it can result in build errors or omissions because it is not a clean build. Fortunately there is a compromise: if you set CTEST_START_WITH_EMPTY_BINARY_DIRECTORY_ONCE to true, CTest will start with a clean binary directory for the first continuous build but not subsequent ones. Depending on your settings for the duration, this is an easy way to start with a clean build every morning, but use existing builds for the rest of the day.<br />
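<br />
Using this compromise is again just a matter of setting the variable in your script:<br />
<br />
<syntaxhighlight lang="text"><br />
# wipe the binary tree only for the first continuous<br />
# dashboard of the day<br />
set (CTEST_START_WITH_EMPTY_BINARY_DIRECTORY_ONCE TRUE)<br />
</syntaxhighlight><br />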
<br />
Another helpful feature to use with a continuous dashboard is the -I option. A large project may have so many tests that running all the tests limits how frequently a continuous dashboard can be generated. By adding -I,,7 (or -I,,5 etc) to the CTEST_COMMAND value, the continuous dashboard will only run every seventh test, significantly reducing the time required between continuous dashboards. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
# these are the names of the source and binary directories<br />
set (CTEST_SOURCE_NAME CMake-cont)<br />
set (CTEST_BINARY_NAME CMakeBCC-cont)<br />
set (CTEST_DASHBOARD_ROOT "c:/Dashboards/My Tests")<br />
set (CTEST_SOURCE_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_SOURCE_NAME}")<br />
set (CTEST_BINARY_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_BINARY_NAME}")<br />
<br />
# which ctest command to use for running the dashboard<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\"<br />
-D Continuous<br />
-A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\"")<br />
<br />
# what CMake command to use for configuring this dashboard<br />
set (CTEST_CMAKE_COMMAND "\"${CMAKE_EXECUTABLE_NAME}\"")<br />
<br />
# this is the initial cache to use for the binary tree<br />
set (CTEST_INITIAL_CACHE "<br />
SITE:STRING=dash14.kitware<br />
BUILDNAME:STRING=Win32-bcc5.6<br />
CMAKE_GENERATOR:INTERNAL=Borland Makefiles<br />
CVSCOMMAND:FILEPATH=C:/Program Files/TortoiseCVS/cvs.exe<br />
CMAKE_CXX_FLAGS:STRING=-w- -whid -waus -wpar -tWM<br />
CMAKE_C_FLAGS:STRING=-w- -whid -waus -tWM<br />
")<br />
<br />
# set any extra environment variables here<br />
set (ENV{PATH} "C:/Program Files/Borland/CBuilder6/Bin\;<br />
C:/Program Files/Borland/CBuilder6/Projects/Bpl"<br />
)<br />
</syntaxhighlight><br />
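<br />
Note that the CTEST_COMMAND in the script above does not include the test stride; adding the -I option described earlier would look like this (a sketch):<br />
<br />
<syntaxhighlight lang="text"><br />
# run only every seventh test on the continuous dashboard<br />
set (CTEST_COMMAND<br />
  "\"${CTEST_EXECUTABLE_NAME}\" -D Continuous -I ,,7<br />
  -A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\"")<br />
</syntaxhighlight><br />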
<br />
<br />
====Variables Available in CTest Scripts====<br />
<br />
There are a few variables that will be set before your script executes. The first two are the directory the script is in, CTEST_SCRIPT_DIRECTORY, and the name of the script itself, CTEST_SCRIPT_NAME. These two variables can be used to make your scripts more portable. For example, if you wanted to include the script itself as a note for the dashboard, you could do the following:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\" -D Continuous<br />
-A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\""<br />
)<br />
</syntaxhighlight><br />
<br />
Another variable you can use is CTEST_SCRIPT_ARG. This variable can be set by providing a comma-separated argument after the script name when invoking ctest -S. For example, ctest -S foo.cmake,21 would result in CTEST_SCRIPT_ARG being set to 21.<br />
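<br />
Inside the script the argument can then drive behavior; for instance (a sketch, where the fallback model name is our own choice):<br />
<br />
<syntaxhighlight lang="text"><br />
# pick the dashboard model from the script argument, if one was given<br />
if (CTEST_SCRIPT_ARG)<br />
  set (MODEL "${CTEST_SCRIPT_ARG}")<br />
else ()<br />
  set (MODEL "Nightly")<br />
endif ()<br />
</syntaxhighlight><br />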
<br />
<br />
====Limitations of Traditional CTest Scripting====<br />
<br />
The traditional CTest scripting described in this section has some limitations. The first is that the dashboard will always fail if the configure step fails, because the input files for CTest are actually generated by the configure step. To make things worse, the update step will not happen and the dashboard will be stuck. To prevent this, an additional update step is necessary. This can be achieved by adding a CTEST_EXTRA_UPDATES_1 variable with a "-D yesterday" or similar flag. This will update the repository prior to doing a dashboard. Since it will update to yesterday's time stamp, the actual update step of CTest will find the files that were modified since the previous day.<br />
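<br />
In a script this might look like the following. This is only a sketch: the exact argument form that CTEST_EXTRA_UPDATES_1 expects may differ between CTest versions, so treat both the command variable and the flag string as assumptions to verify:<br />
<br />
<syntaxhighlight lang="text"><br />
# extra update pass: move the checkout to yesterday's time stamp first<br />
# (argument form is an assumption; check your CTest version)<br />
set (CTEST_EXTRA_UPDATES_1 "${CTEST_CVS_COMMAND}" "-q update -D yesterday")<br />
</syntaxhighlight><br />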
<br />
The second limitation of traditional CTest scripting is that it is not actually scripting. We only have control over what happens before the actual CTest run, but not what happens during or after. For example, if we want to run the testing and then move the binaries somewhere, or if we want to build the project, do some extra tasks and then run tests or something similar, we need to perform several complicated tasks, such as run CMake with -P option as a part of CTEST_COMMAND.<br />
<br />
<br />
===Advanced CTest Scripting===<br />
<br />
The CTest scripting described in the previous section is still valid and will still work. This section describes how to write command-based CTest scripts that allow the maintainer to have much more fine-grained control over the individual steps of a dashboard.<br />
<br />
<br />
====Extended CTest Scripting====<br />
<br />
To overcome the limitations of traditional CTest scripting, CTest provides an extended scripting mode. In this mode, the dashboard maintainer has access to individual CTest command functions, such as ctest_configure and ctest_build. By running these functions individually, the user can flexibly develop custom testing schemes. Here's an example of an extended CTest script:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.2)<br />
<br />
set (CTEST_SITE "andoria.kitware")<br />
set (CTEST_BUILD_NAME "Linux-g++")<br />
set (CTEST_NOTES_FILES<br />
"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")<br />
<br />
set (CTEST_DASHBOARD_ROOT "$ENV{HOME}/Dashboards/My Tests")<br />
set (CTEST_SOURCE_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake")<br />
set (CTEST_BINARY_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake-gcc")<br />
<br />
set (CTEST_UPDATE_COMMAND "/usr/bin/cvs")<br />
set (CTEST_CONFIGURE_COMMAND<br />
"\"${CTEST_SOURCE_DIRECTORY}/bootstrap\"")<br />
set (CTEST_BUILD_COMMAND "/usr/bin/make -j 2")<br />
<br />
ctest_empty_binary_directory (${CTEST_BINARY_DIRECTORY})<br />
<br />
ctest_start (Nightly)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
</syntaxhighlight><br />
<br />
The first line is there to make sure an appropriate version of CTest is used; the advanced scripting was introduced in CTest 2.2. The CMake parser is used, so all scriptable commands from CMake are available. This includes the cmake_minimum_required command:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.2)<br />
</syntaxhighlight><br />
<br />
Overall, the layout of the rest of this script is similar to a traditional one. There are several settings that CTest will use to perform its tasks. Then, unlike with traditional CTest, there are the actual tasks that CTest will perform. Instead of providing information in the project's CMake cache, in this scripting mode all the information is provided to CTest. For compatibility reasons we may choose to write the information to the cache, but that is up to the dashboard maintainer. The first block contains the variables about the submission.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_SITE "andoria.kitware")<br />
set (CTEST_BUILD_NAME "Linux-g++")<br />
set (CTEST_NOTES_FILES<br />
"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")<br />
</syntaxhighlight><br />
<br />
These variables serve the same role as the SITE and BUILD_NAME cache variables. They are used to identify the system once it submits the results to the dashboard. CTEST_NOTES_FILES is a list of files that should be submitted as the notes of the dashboard submission. This variable corresponds to the -A flag of CTest.<br />
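<br />
Since CTEST_NOTES_FILES is a list, several files can be attached to a submission at once; for example (the second file name here is purely hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_NOTES_FILES<br />
  "${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}"<br />
  "${CTEST_SCRIPT_DIRECTORY}/nightly-settings.txt")<br />
</syntaxhighlight><br />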
<br />
The second block describes the information that CTest functions will use to perform the tasks:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_DASHBOARD_ROOT "$ENV{HOME}/Dashboards/My Tests")<br />
set (CTEST_SOURCE_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake")<br />
set (CTEST_BINARY_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake-gcc")<br />
set (CTEST_UPDATE_COMMAND "/usr/bin/cvs")<br />
set (CTEST_CONFIGURE_COMMAND<br />
"\"${CTEST_SOURCE_DIRECTORY}/bootstrap\"")<br />
set (CTEST_BUILD_COMMAND "/usr/bin/make -j 2")<br />
</syntaxhighlight><br />
<br />
The CTEST_SOURCE_DIRECTORY and CTEST_BINARY_DIRECTORY variables serve the same purpose as in the traditional CTest script. The only difference is that we will be able to override these variables later on when calling the CTest functions, if necessary. CTEST_UPDATE_COMMAND is the path to the command used to update the source directory from the repository. Currently CTest supports Concurrent Versions System (CVS), Subversion, Git, Mercurial, and Bazaar.<br />
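<br />
For a project hosted in one of the other supported systems, only the command path changes; for instance (the path is an assumption for a typical Linux install):<br />
<br />
<syntaxhighlight lang="text"><br />
# use Subversion instead of CVS for the update step<br />
set (CTEST_UPDATE_COMMAND "/usr/bin/svn")<br />
</syntaxhighlight><br />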
<br />
Both the configure and build handlers support two modes. One mode is to provide the full command that will be invoked during that stage. This is designed to support projects that do not use CMake as their configuration or build tool. In this case, you specify the full command lines to configure and build your project by setting the CTEST_CONFIGURE_COMMAND and CTEST_BUILD_COMMAND variables respectively. This is similar to specifying CTEST_CMAKE_COMMAND in the traditional CTest scripting.<br />
<br />
For projects that use CMake for their configuration and build steps, you do not need to specify the command lines for configuring and building your project. Instead, you specify the CMake generator to use by setting the CTEST_CMAKE_GENERATOR variable. This way CMake will be run with the appropriate generator. One example of this is:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CMAKE_GENERATOR "Visual Studio 8 2005")<br />
</syntaxhighlight><br />
<br />
For the build step you should also set the variables CTEST_PROJECT_NAME and CTEST_BUILD_CONFIGURATION to specify how to build the project. In this case CTEST_PROJECT_NAME will match the top-level CMakeLists file's PROJECT command, and therefore also match the name of the generated Visual Studio *.sln file. The CTEST_BUILD_CONFIGURATION should be one of Release, Debug, MinSizeRel, or RelWithDebInfo. Additionally, CTEST_BUILD_FLAGS can be provided as a hint to the build command. An example of testing for a CMake-based project would be:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CMAKE_GENERATOR "Visual Studio 8 2005")<br />
set (CTEST_PROJECT_NAME "Grommit")<br />
set (CTEST_BUILD_CONFIGURATION "Debug")<br />
</syntaxhighlight><br />
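<br />
The optional CTEST_BUILD_FLAGS mentioned above is passed through to the native build tool, so its value is tool-specific; a sketch for a Makefile-based generator:<br />
<br />
<syntaxhighlight lang="text"><br />
# hint the native build tool; -j only makes sense for make-style tools<br />
set (CTEST_BUILD_FLAGS "-j2")<br />
</syntaxhighlight><br />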
<br />
The final block performs the actual testing and submission:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_empty_binary_directory (${CTEST_BINARY_DIRECTORY})<br />
<br />
ctest_start (Nightly)<br />
<br />
ctest_update (SOURCE<br />
"${CTEST_SOURCE_DIRECTORY}" RETURN_VALUE res)<br />
ctest_configure (BUILD<br />
"${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
ctest_submit (RETURN_VALUE res)<br />
</syntaxhighlight><br />
<br />
The ctest_empty_binary_directory command empties the directory and all subdirectories. Please note that this command has a safety measure built in, which is that it will only remove the directory if there is a CMakeCache.txt file in the top level directory. This was intended to prevent CTest from mistakenly removing a non-build directory.<br />
<br />
The rest of the block contains the calls to the actual CTest functions. Each of them corresponds to a CTest -D option. For example, instead of:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D ExperimentalBuild<br />
</syntaxhighlight><br />
<br />
the script would contain:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
</syntaxhighlight><br />
<br />
Each step yields a return value, which indicates if the step was successful. For example, the return value of the Update stage can be used in a continuous dashboard to determine if the rest of the dashboard should be run.<br />
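<br />
For instance, in a continuous script the update step's return value (the number of updated files) could gate the rest of the dashboard; a sketch:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Continuous)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}" RETURN_VALUE res)<br />
# only rebuild and test when the update actually brought in changes<br />
if (res GREATER 0)<br />
  ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
  ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
  ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
  ctest_submit ()<br />
endif ()<br />
</syntaxhighlight><br />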
<br />
To demonstrate some advantages of using extended CTest scripting, let us examine a more advanced CTest script. This script drives testing of an application called Slicer. Slicer uses CMake internally, but it drives the build process through a series of Tcl scripts. One of the problems of this approach is that it does not support out-of-source builds. Also, on Windows certain modules come pre-built, so they have to be copied to the build directory. To test a project like that, we would use a script like this:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.2)<br />
<br />
# set the dashboard specific variables -- name and notes<br />
set (CTEST_SITE "dash11.kitware")<br />
set (CTEST_BUILD_NAME "Win32-VS71")<br />
set (CTEST_NOTES_FILES<br />
"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")<br />
<br />
# do not let any single test run for more than 1500 seconds<br />
set (CTEST_TIMEOUT "1500")<br />
<br />
# set the source and binary directories<br />
set (CTEST_SOURCE_DIRECTORY "C:/Dashboards/MyTests/slicer2")<br />
set (CTEST_BINARY_DIRECTORY "${CTEST_SOURCE_DIRECTORY}-build")<br />
<br />
set (SLICER_SUPPORT<br />
"//Dash11/Shared/Support/SlicerSupport/Lib")<br />
set (TCLSH "${SLICER_SUPPORT}/win32/bin/tclsh84.exe")<br />
<br />
# set the complete update, configure and build commands<br />
set (CTEST_UPDATE_COMMAND<br />
"C:/Program Files/TortoiseCVS/cvs.exe")<br />
set (CTEST_CONFIGURE_COMMAND<br />
"\"${TCLSH}\"<br />
\"${CTEST_BINARY_DIRECTORY}/Scripts/genlib.tcl\"")<br />
set (CTEST_BUILD_COMMAND<br />
"\"${TCLSH}\"<br />
\"${CTEST_BINARY_DIRECTORY}/Scripts/cmaker.tcl\"")<br />
<br />
# clear out the binary tree<br />
file (WRITE "${CTEST_BINARY_DIRECTORY}/CMakeCache.txt"<br />
"// Dummy cache just so that ctest will wipe binary dir")<br />
ctest_empty_binary_directory (${CTEST_BINARY_DIRECTORY})<br />
<br />
# special variables for the Slicer build process<br />
set (ENV{MSVC6} "0")<br />
set (ENV{GENERATOR} "Visual Studio 7 .NET 2003")<br />
set (ENV{MAKE} "devenv.exe")<br />
set (ENV{COMPILER_PATH}<br />
"C:/Program Files/Microsoft Visual Studio .NET<br />
2003/Common7/Vc7/bin")<br />
set (ENV{CVS} "${CTEST_UPDATE_COMMAND}")<br />
<br />
# start and update the dashboard<br />
ctest_start (Nightly)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
# define a macro to copy a directory<br />
macro (COPY_DIR srcdir destdir)<br />
exec_program ("${CMAKE_EXECUTABLE_NAME}" ARGS<br />
"-E copy_directory \"${srcdir}\" \"${destdir}\"")<br />
endmacro ()<br />
<br />
# Slicer does not support out of source builds so we<br />
# first copy the source directory to the binary directory<br />
# and then build it<br />
copy_dir ("${CTEST_SOURCE_DIRECTORY}"<br />
"${CTEST_BINARY_DIRECTORY}")<br />
<br />
# copy support libraries that slicer needs into the binary tree<br />
copy_dir ("${SLICER_SUPPORT}"<br />
"${CTEST_BINARY_DIRECTORY}/Lib")<br />
<br />
# finally do the configure, build, test and submit steps<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
</syntaxhighlight><br />
<br />
With extended CTest scripting we have full control over the flow, so we can perform arbitrary commands at any point. For example, after performing an update of the project, the script copies the source tree into the build directory. This allows it to do an "out-of-source" build.<br />
<br />
<br />
===Setting up a Dashboard Server===<br />
<br />
For many projects, using Kitware's my.cdash.org dashboard hosting will be sufficient. If that is the case for you, then you can skip this section. If you wish to set up your own server, this section will walk you through the process. There are a few options for what to run on the server to process the dashboard results. The preferred option is to use CDash, a dashboard server based on PHP, MySQL, CSS, and XSLT. Predecessors to CDash such as DART 1 and DART 2 can also be used. Information on the DART systems can be found at http://www.itk.org/Dart/HTML/Index.shtml.<br />
<br />
<br />
====CDash Server====<br />
<br />
CDash is a dashboard server developed by Kitware that is based on the common "LAMP stack." It makes use of PHP, CSS, XSL, MySQL/PostgreSQL, and of course your web server (normally Apache). CDash takes the dashboard submissions as XML and stores them in an SQL database (currently MySQL and PostgreSQL are supported). When the web server receives requests for pages, the PHP scripts extract the relevant data from the database and produce XML that is sent to XSL templates, which in turn convert it into HTML. CSS is used to provide the overall look and feel for the pages. CDash can handle large projects; it has hosted up to 30 projects on a reasonable web server, with just over 200 million records and about 89 gigabytes in the database, stored on a separate database server machine.<br />
<br />
<br />
=====Server requirements=====<br />
<br />
* MySQL (5.x and higher) or PostgreSQL (8.3 and higher)<br />
* PHP (5.0 recommended)<br />
* XSL module for PHP (apt-get install php5-xsl)<br />
* cURL module for PHP<br />
* GD module for PHP<br />
<br />
=====Getting CDash=====<br />
<br />
You can get CDash from the www.cdash.org website, or you can get the latest code from SVN using the following command:<br />
<br />
<syntaxhighlight lang="text"><br />
svn co https://www.kitware.com/svn/CDash/trunk CDash<br />
</syntaxhighlight><br />
<br />
=====Quick installation=====<br />
<br />
1. Unzip or check out CDash in your webroot directory on the server. Make sure the web server has read permission to the files.<br />
<br />
2. Create a cdash/config.local.php file and add the following lines, adapted to your server configuration:<br />
<br />
<syntaxhighlight lang="text"><br />
// Hostname of the database server<br />
$CDASH_DB_HOST = 'localhost';<br />
<br />
// Login for database access<br />
$CDASH_DB_LOGIN = 'root';<br />
<br />
// Password for database access<br />
$CDASH_DB_PASS = '';<br />
<br />
// Name of the database<br />
$CDASH_DB_NAME = 'cdash';<br />
<br />
// Database type<br />
$CDASH_DB_TYPE = 'mysql';<br />
</syntaxhighlight><br />
<br />
3. Point your web browser to the install.php script:<br />
<br />
<syntaxhighlight lang="text"><br />
http://mywebsite.com/CDash/install.php<br />
</syntaxhighlight><br />
<br />
4. Follow the installation instructions<br />
<br />
5. When the installation is done, add the following line to config.local.php to ensure the installation script is no longer accessible:<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_PRODUCTION_MODE = true;<br />
</syntaxhighlight><br />
<br />
<br />
=====Testing the installation=====<br />
<br />
In order to test the installation of the CDash server, you can download a small test project and test the submission to CDash by following these steps:<br />
<br />
1. Download and unzip the test project at:<br />
<br />
<syntaxhighlight lang="text"><br />
http://www.cdash.org/download/CDashTest.zip<br />
</syntaxhighlight><br />
<br />
2. Create a CDash project named "test" on your CDash server (see 10.7 Producing Test Dashboards)<br />
<br />
3. Download the CTestConfig.cmake file from the CDash server, replacing the existing one in CDashTest with the one from your server<br />
<br />
4. Run CMake on CDashTest to configure the project<br />
<br />
5. Run:<br />
<br />
<syntaxhighlight lang="text"><br />
make Experimental<br />
</syntaxhighlight><br />
<br />
6. Go to the dashboard page for the "test" project; you should see the submission in the Experimental section.<br />
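<br />
The CTestConfig.cmake file downloaded in step 3 tells CTest where to submit results; its contents look roughly like this (all values below are placeholders for your own server and project):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_PROJECT_NAME "test")<br />
set (CTEST_NIGHTLY_START_TIME "01:00:00 UTC")<br />
set (CTEST_DROP_METHOD "http")<br />
set (CTEST_DROP_SITE "mywebsite.com")<br />
set (CTEST_DROP_LOCATION "/CDash/submit.php?project=test")<br />
set (CTEST_DROP_SITE_CDASH TRUE)<br />
</syntaxhighlight><br />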
<br />
<br />
====Advanced Server Management====<br />
<br />
=====Project Roles: CDash supports three role levels for users:=====<br />
<br />
* Normal users are regular users with read and/or write access to the project's code repository.<br />
* Site maintainers are responsible for periodic submissions to CDash.<br />
* Project administrators have reserved privileges to administer the project in CDash.<br />
<br />
The first two levels can be defined by the users themselves. Project administrator access must be granted by another administrator of the project, or a CDash server administrator.<br />
<br />
In order to change the current role for a user:<br />
<br />
# Select [Manage project roles] in the administration section<br />
# If you have more than one project, select the appropriate project<br />
# In the "current users" section, change the role for a user<br />
# Click "update" to update the current role<br />
# In order to completely remove a user from a project, click "remove"<br />
# If the CVS login is not correct it can be changed from this page. Note that users can also change their CVS login manually from their profile<br />
<br />
In order to add a current role for a user:<br />
<br />
# Select [Manage project roles] in the administration section<br />
# Then, if you have more than one project, select the appropriate project<br />
# In the "Add new user" section type the first letters of the first name, last name, or email address of the user you want to add. Or type '%' in order to show all the users registered in CDash<br />
# Select the appropriate user's role<br />
# Optionally enter the user's CVS login<br />
# Click on "add user"<br />
<br />
<<Figure 10.5 : Project Role management page in CDash>><br />
<br />
<br />
=====Importing users: to batch import a list of current users for a given project=====<br />
<br />
1. Click on [manage project role] in the administration section<br />
2. Select the appropriate project<br />
3. Click "Browse" to select a CVS users file.<br />
4. The file should be formatted as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
cvsuser:email:first_name last_name<br />
</syntaxhighlight><br />
<br />
5. Click "import"<br />
6. Make sure the reported names and email addresses are correct; deselect any that should not be imported<br />
7. Click on "Register and send email". This will automatically register the users, set a random password and send a registration request to the appropriate email addresses.<br />
<br />
<br />
=====Google Analytics=====<br />
<br />
Usage statistics of the CDash server can be assessed using Google Analytics. In order to set up Google Analytics:<br />
<br />
# Go to http://www.google.com/analytics/index.html<br />
# Setup an account, if necessary<br />
# Add a website project<br />
# Login into CDash as the administrator of a project<br />
# Click on "Edit Project"<br />
# Add the code from Google into the Google Analytics Tracker (i.e. UA-43XXXX-X) for your project<br />
<br />
<br />
=====Submission backup=====<br />
<br />
CDash backs up all incoming XML submissions and places them in the backup directory by default. The default timeframe is 48 hours. The timeframe can be changed in config.local.php as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_BACKUP_TIMEFRAME=72;<br />
</syntaxhighlight><br />
<br />
If projects are private, it is recommended to set the backup directory outside of the Apache root directory to make sure that nobody can access the XML files, or to add the following lines to the .htaccess in the backup directory:<br />
<br />
<syntaxhighlight lang="text"><br />
<Files *><br />
order allow,deny<br />
deny from all<br />
</Files><br />
</syntaxhighlight><br />
<br />
Note that the backup directory is emptied only when a new submission arrives. If necessary, CDash can also import builds from the backup directory.<br />
<br />
# Log into CDash as administrator<br />
# Click on [Import from backups] in the administration section<br />
# Click on "Import backups"<br />
<br />
<br />
====Build Groups====<br />
<br />
Builds can be organized by groups. In CDash, three groups are defined automatically and cannot be removed: Nightly, Continuous and Experimental. These groups are the same as the ones imposed by CTest. Each group has an associated description that is displayed when clicking on the name of the group on the main dashboard.<br />
<br />
<br />
=====To add a new group:=====<br />
<br />
# Click on [manage project groups] in the administration section<br />
# Select the appropriate project<br />
# Under the "create new group" section enter the name of the new group<br />
# Click on "create group". The newly created group appears at the bottom of the current dashboard<br />
<br />
<br />
=====To order groups:=====<br />
<br />
# Click on [manage project groups] in the administration section<br />
# Select the appropriate project<br />
# Under the "Current Groups" section, click on the [up] or [down] links. The order displayed in this page is exactly the same as the order on the dashboard<br />
<br />
<br />
=====To update group description:=====<br />
<br />
# Click on [manage project groups] in the administration section<br />
# Select the appropriate project<br />
# Under the "Current Groups" section, update or add a description in the field next to the [up]/[down] links<br />
# Click "Update Description" in order to commit your changes<br />
<br />
By default, a build belongs to the group associated with the build type defined by CTest, i.e. a nightly build will go in the nightly section. CDash matches a build by its name, site, and build type. For instance, a nightly build named "Linux-gcc-4.3" from the site "midworld.kitware" will be moved to the nightly section unless a rule on "Linux-gcc-4.3"-"midworld.kitware"-"Nightly" is defined. There are two ways to move a build into a given group by defining a rule: Global Move and Single Move.<br />
<br />
<br />
=====Global move allows moving builds in batch.=====<br />
<br />
# Click on [manage project groups] in the administration section.<br />
# Select the appropriate project (if more than one).<br />
# Under "Global Move" you will see a list of the builds submitted in the past 7 days (without duplicates). Note that expected builds are also shown, even if they have not been submitting for the past 7 days.<br />
# You can narrow your search by selecting a specific group (default is All).<br />
# Select the builds to move. Hold "shift" in order to select multiple builds.<br />
# Select the target group. This is mandatory.<br />
# Optionally check the "expected" box if you expect the builds to be submitted on a daily basis. For more information on expected builds, see the "Expected builds" section below.<br />
# Click "Move Selected Builds to Group" to move the groups.<br />
<br />
<br />
=====Single move allows modifying only a particular build.=====<br />
<br />
If logged in as an administrator of the project, a small folder icon is displayed next to each build on the main dashboard page. Clicking on the icon shows some options for each build. In particular, project administrators can mark a build as expected, move a build to a specific group, or delete a bogus build.<br />
<br />
Expected builds: Project administrators can mark certain builds as expected, meaning the builds are expected to submit daily. This allows you to quickly check whether a build is missing from today's dashboard, or to quickly assess how long the build has been missing by clicking on the info icon on the main dashboard.<br />
<br />
<<Figure 10.6: Information regarding a build from the main dash board page>><br />
<br />
If an expected build did not submit the previous day and the option "Email Build Missing" is checked for the project, an email will be sent to the site maintainer and project administrator to alert them (see the Sites section for more information).<br />
<br />
<br />
====Email====<br />
<br />
CDash sends email to developers and project administrators when a failure occurs for a given build. The configuration of the email feature is located in three places: the config.local.php file, the project's email configuration section, and the project's build groups section.<br />
<br />
In the config.local.php file, two variables are defined to specify the email address from which email is sent and the reply address. Note that the SMTP server cannot be defined in the current version of CDash; it is assumed that a local email server is running on the machine.<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_EMAIL_FROM = 'admin@mywebsite.com';<br />
$CDASH_EMAIL_REPLY = 'noreply@mywebsite.com';<br />
</syntaxhighlight><br />
<br />
<<Figure 10.7: Build Group Configuration Page>><br />
<br />
In the email configuration section of the project, several parameters can be tuned to control the email feature. These parameters were described in the previous section, "Adding CDash Support to a Project".<br />
<br />
In the "build groups" administration section of a project, an administrator can decide if emails are sent to a specific group, or if only a summary email should be sent. The summary email is sent for a given group when at least one build is failing on the current day.<br />
<br />
<br />
====Sites====<br />
<br />
CDash refers to a site as an individual machine submitting at least one build to a given project. A site might submit multiple builds (e.g. nightly and continuous) to multiple projects stored in CDash.<br />
<br />
In order to see the site description, click on the name of the site from the main dashboard page for a project. The description of a site includes information regarding the processor type and speed, as well as the amount of memory available on the given machine. The description of a site is automatically sent by CTest; however, in some cases it might be necessary to edit it manually. Moreover, if the machine is upgraded, e.g. the memory is increased, CDash keeps track of the history of the description, allowing users to compare performance before and after the upgrade.<br />
<br />
Sites usually belong to one maintainer, responsible for the submissions to CDash. It is important for site maintainers to be warned when a site is not submitting, as it could be related to a configuration issue. In order to claim a site, a maintainer should:<br />
<br />
# Log into CDash<br />
# Click on a dashboard containing at least one build for the site<br />
# Click on the site name to open the description of the site<br />
# Click on [claim this site]<br />
<br />
Once a site is claimed, its maintainer will receive emails if the client machine does not submit for an unknown reason, assuming that the site is expected to submit nightly. Furthermore, the site will appear in the "My Sites" section of the maintainer's profile, facilitating a quick check of the site's status.<br />
<br />
Another feature of the site page is the pie chart showing the load of the machine. Assuming that a site submits to multiple projects, it is usually useful to know if the machine has room for other submissions to CDash. The pie chart gives an overview of the machine submission time for each project.<br />
<br />
====Graphs====<br />
<br />
CDash currently plots three types of graphs. The graphs are generated dynamically from the database records, and are interactive.<br />
<br />
<<Figure 10.8: Pie chart showing how much time is spent by a given site on building CDash projects>><br />
<br />
<<Figure 10.9: Map showing the location of the different sites building>><br />
<br />
<<Figure 10.10: Example of build time over time>><br />
<br />
The build time graph displays the time required to build a project over time. In order to display the graph you need to:<br />
<br />
# Go to the main dashboard for the project.<br />
# Click on the build name you want to track.<br />
# On the build summary page, click on [Show Build Time Graph].<br />
<br />
The test time graphs display the time to run a specific test, as well as its status (passed/failed) over time. To display them:<br />
<br />
# Go to the main dashboard for a project.<br />
# Click on the number of tests passed or failed.<br />
# From the list of tests, click on the status of the test.<br />
# Click on [Show Test Time Graph] and/or [Show Failing/Passing Graph].<br />
<br />
<br />
====Adding Notes to a Build====<br />
<br />
In some cases, it is useful to inform other developers that someone is currently looking at the errors for a build. CDash implements a simple note mechanism for that purpose:<br />
<br />
# Login to CDash.<br />
# On the dashboard project page, click on the build name that you would like to add the note to.<br />
# Click on the [Add a Note to this Build] link, located next to the current build matrix (see thumbnail).<br />
# Enter a short message that will be added as a note.<br />
# Select the status of the note: Simple note, Fix in progress, or Fixed.<br />
# Click on "Add Note".<br />
<br />
<br />
====Logging====<br />
<br />
CDash supports an internal logging mechanism using the error_log() PHP function. Any critical SQL errors are logged. By default, the CDash log file is located in the backup directory under the name cdash.log. The location of the log file can be modified by changing the $CDASH_BACKUP_DIRECTORY variable in the config.local.php configuration file.<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_BACKUP_DIRECTORY='/var/temp/cdashbackup/log';<br />
</syntaxhighlight><br />
<br />
The log file can be accessed directly from CDash if the log file is in the standard location:<br />
<br />
# Log into CDash as administrator.<br />
# Click on [CDash logs] in the administration section.<br />
# Click on cdash.log to see the log file.<br />
<br />
CDash 2.0 introduced a log file rotation feature.<br />
<br />
<br />
====Test Timing====<br />
<br />
CDash supports checks on the duration of tests. CDash keeps the current weighted average of the mean and<br />
standard deviation for the time each test takes to run in the database. In order to keep the computation as<br />
efficient as possible, the following formula is used, which only involves the previous build.<br />
<br />
<syntaxhighlight lang="text"><br />
// alpha is the current "window" for the computation<br />
// By default, alpha is 0.3<br />
newMean = (1-alpha) * oldMean + alpha * currentTime<br />
<br />
newSD = sqrt((1-alpha) * SD * SD +<br />
alpha * (currentTime-newMean) * (currentTime-newMean))<br />
</syntaxhighlight><br />
<br />
A test is defined as having failed timing based on the following logic:<br />
<br />
<syntaxhighlight lang="text"><br />
if previousSD < thresholdSD then previousSD = thresholdSD<br />
if currentTime > previousMean + multiplier * previousSD then fail<br />
</syntaxhighlight><br />
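<br />
As an illustration of these formulas, assume hypothetical values: alpha = 0.3, a previous mean of 10 seconds, a previous standard deviation of 1 second, and a current run time of 12 seconds. The update then works out to:<br />
<br />
<syntaxhighlight lang="text"><br />
newMean = 0.7 * 10 + 0.3 * 12 = 10.6<br />
newSD   = sqrt(0.7 * 1 * 1 + 0.3 * (12 - 10.6) * (12 - 10.6))<br />
        = sqrt(0.7 + 0.588) ≈ 1.13<br />
</syntaxhighlight><br />
<br />
With a hypothetical multiplier of 3, this run would pass the timing check, since 12 is not greater than 10 + 3 * 1 = 13.<br />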
<br />
<br />
====Mobile Support====<br />
<br />
Since CDash is written using template layers via XSLT, developing new layouts is as simple as adding new rendering templates. As a demonstration, an iPhone web template is provided with the current version of CDash.<br />
<br />
The main page shows a list of the public projects hosted on the server. Clicking on the name of a project loads its current dashboard. In the same manner, clicking on a given build displays more detailed information about that build. As of this writing, the ability to login and to access private sections of CDash are not supported with this layout.<br />
<br />
<br />
====Backing up CDash====<br />
<br />
All of the data (except the logs) used by CDash is stored in its database. It is important to back up the database regularly, especially before performing a CDash upgrade. There are a couple of ways to back up a MySQL database. The easiest is to use the mysqldump<ref>http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html</ref> command:<br />
<br />
<syntaxhighlight lang="text"><br />
mysqldump -r cdashbackup.sql cdash<br />
</syntaxhighlight><br />
<br />
If you are using MyISAM tables exclusively, you can copy the CDash directory in your MySQL data directory. Note that you need to shut down MySQL before doing the copy so that no files change during the copy. Similarly to MySQL, PostgreSQL has a pg_dump utility:<br />
<br />
<syntaxhighlight lang="text"><br />
pg_dump -U postgreSQL_user cdash > cdashbackup.sql<br />
</syntaxhighlight><br />
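<br />
To restore from one of these dumps, the corresponding client tools can be used. This is only a sketch; the database name, user, and file names are the same assumptions as in the examples above:<br />
<br />
<syntaxhighlight lang="text"><br />
mysql cdash < cdashbackup.sql<br />
psql -U postgreSQL_user -d cdash < cdashbackup.sql<br />
</syntaxhighlight><br />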
<br />
<br />
====Upgrading CDash====<br />
<br />
When a new version of CDash is released, or if you decide to update from the SVN repository, CDash will<br />
warn you on the front page if the current database needs to be upgraded. When upgrading to a new release<br />
version, the following steps should be taken:<br />
<br />
# Backup your SQL database (see previous section).<br />
# Backup your config.local.php (or config.php) configuration files.<br />
# Replace your current cdash directory with the latest version and copy the config.local.php in the cdash directory.<br />
# Navigate your browser to your CDash page. (e.g. http://localhost/CDash).<br />
# Note the version number on the main page; it should match the version that you are upgrading to.<br />
# The following message may appear: "The current database schema doesn't match the version of CDash you are running, upgrade your database structure in the Administration panel of CDash." This is a helpful reminder to perform the following steps.<br />
# Login to CDash as administrator.<br />
# In the 'Administration' section, click on '[CDash Maintenance]'.<br />
# Click on 'Upgrade CDash': this process might take some time depending on the size of your database (do not close your browser).<br />
#* Progress messages may appear while CDash performs the upgrade.<br />
#* If the upgrade process takes too long you can check in the backup/cdash.log file to see where the process is taking a long time and/or failing.<br />
#* It has been reported that on some systems the spinning icon never turns into a check mark. Please check the cdash.log for the "Upgrade done." string if you feel that the upgrade is taking too long.<br />
#* On a 50GB database the upgrade might take up to 2 hours.<br />
# Some web browsers might have issues when upgrading (with some javascript variables not being passed correctly); in that case you can perform individual updates. For example, upgrading from CDash 1-2 to 1-4:<br />
<br />
<syntaxhighlight lang="text"><br />
http://mywebsite.com/CDash/backwardCompatibilityTools.php?upgrade-1-4=1<br />
</syntaxhighlight><br />
<br />
<<Figure 10.11: Example of dashboard on the iPhone>><br />
<br />
<br />
====CDash Maintenance====<br />
<br />
Database maintenance: we recommend that you perform database optimization (reindexing, purging, etc.) regularly to maintain a stable database. MySQL has a utility called mysqlcheck, and PostgreSQL has several utilities such as vacuumdb.<br />
<br />
Deleting builds with incorrect dates: some builds might be submitted to CDash with the wrong date, either because the date in the XML file is incorrect or the timezone was not recognized by CDash (mainly by PHP). These builds will not show up in any dashboard because the start time is bogus. In order to remove these builds:<br />
<br />
# Login to CDash as administrator.<br />
# Click on [CDash maintenance] in the administration section.<br />
# Click on 'Delete builds with wrong start date'.<br />
<br />
Recompute test timing: if you just upgraded CDash you might notice that the current submissions are showing a high number of failing tests due to time defects. This is because CDash does not have enough sample points to compute the mean and standard deviation for each test; in particular, the standard deviation might be very small (probably zero for the first few samples). You should turn "enable test timing" off for about a week, or until you get enough build submissions and CDash has calculated an approximate mean and standard deviation for each test time.<br />
<br />
The other option is to force CDash to compute the mean and standard deviation for each test for the past few days. Be warned that this process may take a long time, depending on the number of tests and projects involved. In order to recompute the test timing:<br />
<br />
# Login to CDash as administrator.<br />
# Click on [CDash maintenance] in the administration section.<br />
# Specify the number of days (default is 4) to recompute the test timings for.<br />
# Click on "Compute test timing". When the process is done the new mean, standard deviation, and status should be updated for the tests submitted during this period.<br />
<br />
<br />
=====Automatic build removal=====<br />
<br />
In order to keep the database at a reasonable size, CDash can automatically purge old builds. There are currently two ways to set up automatic removal of builds. Without a cronjob, edit config.local.php and add/edit the following line:<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_AUTOREMOVE_BUILDS='1';<br />
</syntaxhighlight><br />
<br />
CDash will automatically remove builds on the first submission of the day. Note that removing builds might add extra load on the database, or slow down the current submission process if your database is large and the number of submissions is high. If you can use a cronjob, the PHP command line tool can be used to trigger build removals at a convenient time. For example, removing the builds for all the projects at 6am every Sunday:<br />
<br />
<syntaxhighlight lang="text"><br />
0 6 * * 0 php5 /var/www/CDash/autoRemoveBuilds.php all<br />
</syntaxhighlight><br />
<br />
Note that the 'all' parameter can be changed to a specific project name in order to purge builds from a single project.<br />
<br />
<br />
=====CDash XML Schema=====<br />
<br />
The XML parsers in CDash can be easily extended to support new features. The current XML schemas generated by CTest, and their features as described in the book, are located at:<br />
<br />
<syntaxhighlight lang="text"><br />
http://public.kitware.com/Wiki/CDash:XML<br />
</syntaxhighlight><br />
<br />
====Subprojects====<br />
<br />
CDash (versions 1.4 and later) supports splitting projects into subprojects. Some of the subprojects may in turn depend on other subprojects. A typical real life project consists of libraries, executables, test suites, documentation, web pages, and installers. Organizing your project into well-defined subprojects and presenting<br />
the results of nightly builds on a CDash dashboard can help identify where the problems are at different levels of granularity.<br />
<br />
A project with subprojects has a different view for its top level CDash page than a project without any. It<br />
contains a summary row for the project as a whole, and then one summary row for each subproject.<br />
<br />
<br />
=====Organizing and defining subprojects=====<br />
<br />
To add subproject organization to your project, you must: (1) define the subprojects for CDash, so that it knows how to display them properly and (2) use build scripts with CTest to submit subproject builds of your project. Some (re-)organization of your project's CMakeLists.txt files may also be necessary to allow building of your project by subprojects.<br />
<br />
<<Figure 10.12: Main project page with subprojects>><br />
<br />
There are two ways to define subprojects and their dependencies: interactively in the CDash GUI when logged in as a project administrator, or by submitting a Project.xml file describing the subprojects and dependencies.<br />
<br />
<br />
=====Adding Subprojects Interactively=====<br />
<br />
As a project administrator, a "Manage subprojects" button will appear for each of your projects on the My CDash page. Clicking the Manage Subprojects button opens the manage subproject page, where you may add new subprojects or establish dependencies between existing subprojects for any project that you are an administrator of. There are two tabs on this page: one for viewing the current subprojects along with their dependencies, and one for creating new subprojects.<br />
<br />
To add subprojects, for instance two subprojects called Exes and Libs, and to make Exes depend on Libs, the following steps are necessary:<br />
<br />
* Click the "Add a subproject" tab.<br />
* Type "Exes" in the "Add a subproject" edit field.<br />
* Click the "Add subproject" button.<br />
* Click the "Add a subproject" tab.<br />
* Type "Libs" in the "Add a subproject" edit field.<br />
* Click the "Add Subproject" button.<br />
* In the "Exes" row of the "Current Subprojects" tab, choose "Libs" from the "Add dependency" drop-down list and click the "Add dependency" button.<br />
<br />
To remove a dependency or a subproject, click on the "X" next to the item you wish to delete.<br />
<br />
<br />
=====Adding Subprojects Automatically=====<br />
<br />
Another way to define CDash subprojects and their dependencies is to submit a "Project.xml" file along with the usual submission files that CTest sends when it submits a build to CDash. To define the same two subprojects as in the interactive example above (Exes and Libs) with the same dependency (Exes depend on Libs), the Project.xml file would look like the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
<Project name="Tutorial"><br />
<SubProject name="Libs"></SubProject><br />
<SubProject name="Exes"><br />
<Dependency name="Libs"/><br />
</SubProject><br />
</Project><br />
</syntaxhighlight><br />
<br />
Once the Project.xml file is written or generated, it can be submitted to CDash from a ctest -S script using the new FILES argument to the ctest_submit command, or directly from the ctest command line in a build tree configured for dashboard submission.<br />
<br />
From inside a ctest -S script:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_submit(FILES "${CTEST_BINARY_DIRECTORY}/Project.xml")<br />
</syntaxhighlight><br />
<br />
From the command line:<br />
<br />
<syntaxhighlight lang="text"><br />
cd ../Project-build<br />
ctest --extra-submit Project.xml<br />
</syntaxhighlight><br />
<br />
CDash will automatically add subprojects and dependencies according to the Project.xml file. CDash will also remove any subprojects or dependencies not defined in the Project.xml file. Additionally, if the same Project.xml is submitted multiple times, the second and subsequent submissions will have no observable effect: the first submission adds/modifies the data, the second and later submissions send the same data, so no changes are necessary. CDash tracks changes to the subproject definitions over time to allow for projects to evolve. If you view dashboards from a past date, CDash will present the project/subproject views according to the subproject definitions in effect on that date.<br />
<br />
<br />
====Using ctest_submit with PARTS and FILES====<br />
<br />
In CTest version 2.8 and later, the ctest_submit() (page 354) command supports new PARTS and FILES arguments. With PARTS, you can send any subset of the xml files with each ctest_submit call. Previously, all parts would be sent with any call to ctest_submit. Typically, the script would wait until all dashboard stages were complete and then call ctest_submit once to send the results of all stages at the end of the run. Now, a script may call ctest_submit with PARTS to do partial submissions of subsets of the results. For example, you can submit configure results after ctest_configure() (page 352), build results after ctest_build() (page 351), and test results after ctest_test() (page 355). This allows for information to be posted as the builds progress.<br />
<br />
With FILES, you can send arbitrary XML files to CDash. In addition to the standard build result XML files that CTest sends, CDash also handles the new Project.xml file that describes subprojects and dependencies. Prior to the addition of the ctest_submit PARTS handling, a typical dashboard script would contain a single ctest_submit() call on its last line:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
</syntaxhighlight><br />
<br />
Now, submissions can occur incrementally, with each part of the submission sent piecemeal as it becomes available:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit (PARTS Update Configure Notes)<br />
<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
ctest_submit (PARTS Build)<br />
<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit (PARTS Test)<br />
</syntaxhighlight><br />
<br />
Submitting incrementally by parts means that you can inspect the results of the configure stage live on the CDash dashboard while the build is still in progress. Likewise, you can inspect the results of the build stage live while the tests are still running. When submitting by parts, it's important to use the APPEND keyword in the ctest_build command. If you don't use APPEND, then CDash will erase any existing build with the same build name, site name, and build stamp when it receives the Build.xml file.<br />
<br />
<br />
====Splitting Your Project into Multiple Subprojects====<br />
<br />
One ctest_build() (page 351) invocation that builds everything, followed by one ctest_test() (page 355) invocation that tests everything, is sufficient for a project that has no subprojects. However, if you want to submit results on a per-subproject basis to CDash, you will have to make some changes to your project and test scripts. For your project, you need to identify which targets are part of which subprojects. If you organize your CMakeLists files such that you have a target to build for each subproject, and you can derive (or look up) the name of that target based on the subproject name, then revising your script to separate it into multiple smaller configure/build/test chunks should be relatively painless. To do this, you can modify your CMakeLists files in various ways depending on your needs. The most common changes are listed below.<br />
<br />
<br />
=====CMakelists.txt modifications=====<br />
<br />
* Name targets the same as subprojects, base target names on subproject names, or provide a look up mechanism to map from subproject name to target name.<br />
* Possibly add custom targets to aggregate existing targets into subprojects, using add_dependencies to say which existing targets the custom target depends on.<br />
* Add the LABELS target property to targets with a value of the subproject name.<br />
* Add the LABELS test property to tests with a value of the subproject name.<br />
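<br />
For example, a custom target can aggregate several existing targets into one subproject. This is only a sketch; the UserManual and APIReference target names are hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
# Hypothetical "Docs" subproject aggregating two existing targets<br />
add_custom_target (Docs)<br />
add_dependencies (Docs UserManual APIReference)<br />
set_property (TARGET Docs PROPERTY LABELS Docs)<br />
</syntaxhighlight><br />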
<br />
Next, you need to modify your CTest scripts that run your dashboards. To split your one large monolithic<br />
build into smaller subproject builds, you can use a foreach loop in your CTest driver script. To help you<br />
iterate over your subprojects, CDash provides a variable named CTEST_PROJECT_SUBPROJECTS in<br />
CTestConfig.cmake. Given the above example, CDash produces a variable like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_PROJECT_SUBPROJECTS Libs Exes)<br />
</syntaxhighlight><br />
<br />
CDash orders the elements in this list such that the independent subprojects (that do not depend on any other subprojects) are first, followed by subprojects that depend only on the independent subprojects, and after that subprojects that depend on those. The same logic continues until all subprojects are listed exactly once in this list in an order that makes sense for building them sequentially, one after the other.<br />
<br />
To facilitate building just the targets associated with a subproject, use the variable CTEST_BUILD_TARGET to tell ctest_build() (page 351) what to build. To facilitate running just the tests associated with a subproject, assign the LABELS test property to your tests and use the new INCLUDE_LABEL argument to ctest_test() (page 355).<br />
<br />
<br />
=====ctest driver script modifications=====<br />
<br />
* Iterate over the subprojects in dependency order (from independent to most dependent...).<br />
* Set the SubProject and Label global properties - CTest uses these properties to submit the results to the correct subproject on the CDash server.<br />
* Build the target(s) for this subproject: compute the name of the target to build from the subproject name, set CTEST_BUILD_TARGET, call ctest_build.<br />
* Run the tests for this subproject using the INCLUDE or INCLUDE_LABEL arguments to ctest_test.<br />
* Use ctest_submit with the PARTS argument to submit partial results as they complete.<br />
<br />
<br />
To illustrate this, the following example shows the changes required to split a build into smaller pieces. Assume that the subproject name is the same as the target name required to build the subproject's components. For example, here is a snippet from CMakeLists.txt, in the hypothetical Tutorial project. The only additions necessary (since the target names are the same as the subproject names) are the calls to set_property() (page 329) for each target and each test.<br />
<br />
<syntaxhighlight lang="text"><br />
# "Libs" is the library name (therefore a target name) and<br />
# the subproject name<br />
add_library (Libs ...)<br />
set_property (TARGET Libs PROPERTY LABELS Libs)<br />
add_test (LibsTest1 ...)<br />
add_test (LibsTest2 ...)<br />
set_property (TEST LibsTest1 LibsTest2 PROPERTY LABELS Libs)<br />
<br />
# "Exes" is the executable name (therefore a target name)<br />
# and the subproject name<br />
add_executable (Exes ...)<br />
target_link_libraries (Exes Libs)<br />
set_property (TARGET Exes PROPERTY LABELS Exes)<br />
add_test (ExesTest1 ...)<br />
add_test (ExesTest2 ...)<br />
set_property (TEST ExesTest1 ExesTest2 PROPERTY LABELS Exes)<br />
</syntaxhighlight><br />
<br />
Here is an example of what the CTest driver script might look like before and after organizing this project into subprojects. Before the changes:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
# builds *all* targets: Libs and Exes<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
</syntaxhighlight><br />
<br />
After the changes:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_submit (PARTS Update Notes)<br />
<br />
# to get CTEST_PROJECT_SUBPROJECTS definition:<br />
include ("${CTEST_SOURCE_DIRECTORY}/CTestConfig.cmake")<br />
foreach (subproject ${CTEST_PROJECT_SUBPROJECTS})<br />
set_property (GLOBAL PROPERTY SubProject ${subproject})<br />
set_property (GLOBAL PROPERTY Label ${subproject})<br />
<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit (PARTS Configure)<br />
<br />
set (CTEST_BUILD_TARGET "${subproject}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
# builds target ${CTEST_BUILD_TARGET}<br />
ctest_submit (PARTS Build)<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}"<br />
INCLUDE_LABEL "${subproject}"<br />
)<br />
<br />
# runs only tests that have a LABELS property matching<br />
# "${subproject}"<br />
ctest_submit (PARTS Test)<br />
endforeach ()<br />
</syntaxhighlight><br />
<br />
In some projects, more than one ctest_build step may be required to build all the pieces of the subproject. For example, in Trilinos, each subproject builds the ${subproject}_libs target, and then builds the all target to build all the configured executables in the test suite. They also configure dependencies such that only the executables that need to be built for the currently configured packages build when the all target is built.<br />
<br />
Normally, if you submit multiple Build.xml files to CDash with the same exact build stamp, it will delete the existing entry and add the new entry in its place. In the case where multiple ctest_build steps are required, each with their own ctest_submit (PARTS Build) call, use the APPEND keyword argument in all of the ctest_build calls that belong together. The APPEND flag tells CDash to accumulate the results from multiple submissions and display the aggregation of all of them in one row on the dashboard. From CDash's perspective, multiple ctest_build calls (with the same build stamp and subproject and APPEND turned on) result in a single CDash build.<br />
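<br />
For a Trilinos-style project, the build portion of the loop body might look like the following sketch; the ${subproject}_libs target name is an assumption following the convention described above:<br />
<br />
<syntaxhighlight lang="text"><br />
# first build step: the subproject's libraries<br />
set (CTEST_BUILD_TARGET "${subproject}_libs")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
<br />
# second build step: the configured test executables<br />
set (CTEST_BUILD_TARGET "all")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
ctest_submit (PARTS Build)<br />
</syntaxhighlight><br />
<br />
Because both ctest_build calls use APPEND (and share the same build stamp and subproject), CDash displays them as a single build.<br />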
<br />
Adopt some of these tips and techniques in your favorite CMake-based project:<br />
<br />
* LABELS is a new CMake/CTest property that applies to source files, targets and tests. Labels are sent to CDash inside the resulting xml files.<br />
* Use ctest_submit (PARTS) to do incremental submissions. Results are available for viewing on the dashboards sooner. Don't forget to use APPEND in your ctest_build calls when submitting by parts.<br />
* Use INCLUDE_LABEL with ctest_test to run only the tests with labels that match the regular expression.<br />
* Use CTEST_BUILD_TARGET to build your subprojects one at a time, submitting subproject dashboards along the way.<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>
<hr />
<div>==CHAPTER ELEVEN::PORTING CMAKE TO NEW PLATFORMS AND LANGUAGES==<br />
<br />
In order to generate build files for a particular system, CMake needs to determine what system it is running<br />
on and what compiler tools to use for enabled languages. To do this, CMake loads a series of files containing<br />
CMake code from the Modules directory. This all has to happen before the first try-compile or try-run is<br />
executed. To avoid having to re-compute all of this information for each try-compile and for subsequent runs<br />
of CMake, the discovered values are stored in several configured files that are read each time CMake is run.<br />
These files are also copied into the try-compile and try-run directories. This chapter will describe how this<br />
process of system and tool discovery works. An understanding of the process is necessary to extend CMake<br />
to run on new platforms, and to add support for new languages.<br />
<br />
<br />
===The Determine System Process===<br />
<br />
The first thing CMake needs to do is to determine what platform it is running on and what the target platform is. Except for when you are cross compiling, the host platform and the target platform are identical. The host platform is determined by loading the CMakeDetermineSystem.cmake file. On POSIX systems, "uname" is used to get the name of the system. CMAKE_HOST_SYSTEM_NAME (page 651) is set to the result of uname -s, and CMAKE_HOST_SYSTEM_VERSION (page 652) is set to the result of uname -r. On Windows systems, CMAKE_HOST_SYSTEM_NAME is set to Windows and CMAKE_HOST_SYSTEM_VERSION is set to the value returned by the system function GetVersionEx. The variable CMAKE_HOST_SYSTEM (page 651) is set to a combination of CMAKE_HOST_SYSTEM_NAME and CMAKE_HOST_SYSTEM_VERSION as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
${CMAKE_HOST_SYSTEM_NAME}-${CMAKE_HOST_SYSTEM_VERSION}<br />
</syntaxhighlight><br />
<br />
Additionally, CMake tries to figure out the processor of the host. On POSIX systems it uses uname -m or uname -p to retrieve this information, while on Windows it uses the environment variable PROCESSOR_ARCHITECTURE. CMAKE_HOST_SYSTEM_PROCESSOR (page 651) holds the value of the result.<br />
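<br />
For example, on a hypothetical x86_64 Linux host where uname -s reports Linux and uname -r reports 3.2.0, the resulting values would be:<br />
<br />
<syntaxhighlight lang="text"><br />
CMAKE_HOST_SYSTEM_NAME       Linux<br />
CMAKE_HOST_SYSTEM_VERSION    3.2.0<br />
CMAKE_HOST_SYSTEM            Linux-3.2.0<br />
CMAKE_HOST_SYSTEM_PROCESSOR  x86_64<br />
</syntaxhighlight><br />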
<br />
Now that CMake has the information about the host that it is running on, it needs to find this information for the target platform. The results will be stored in the CMAKE_SYSTEM_NAME (page 653), CMAKE_SYSTEM_VERSION (page 653), CMAKE_SYSTEM (page 653), and CMAKE_SYSTEM_PROCESSOR (page 653) variables, corresponding to the CMAKE_HOST_SYSTEM_* variables described above. See the "Cross compiling with CMake" chapter on how this is done when cross compiling. In all other cases the CMAKE_SYSTEM_* variables will be set to the value of their corresponding CMAKE_HOST_SYSTEM_* variable.<br />
<br />
Once the CMAKE_SYSTEM information has been determined, CMakeSystem.cmake.in is configured into ${CMAKE_BINARY_DIR}/CMakeFiles/CMakeSystem.cmake. CMake versions prior to 2.6.0 did not support cross compiling, and so only the CMAKE_SYSTEM_* set of variables was available.<br />
<br />
<br />
===The Enable Language Process===<br />
<br />
After the platform has been determined, the next step is to enable all languages specified in the project() (page 327) command. For each language specified, CMake loads CMakeDetermine(LANG)Compiler.cmake where LANG is the name of the language specified in the project() (page 327) command. For example with project(f Fortran), the file is called CMakeDetermineFortranCompiler.cmake. This file discovers the compiler and tools that will be used to compile files for the particular language. Starting with version 2.6.0 CMake tries to identify the compiler for C, C++ and Fortran not only by its filename, but by compiling some source code, which is named CMake(LANG)CompilerId.(LANG_SUFFIX). If this succeeds, it will return a unique id for every compiler supported by CMake. Once the compiler has been determined for a language, CMake configures the file CMake(LANG)Compiler.cmake.in into CMake(LANG)Compiler.cmake.<br />
<br />
After the platform and compiler tools have been determined, CMake loads CMakeSystemSpecificationInformation.cmake which in turn will load ${CMAKE_SYSTEM_NAME}.cmake from the platform subdirectory of modules if it exists for the platform. An example would be SunOS.cmake. This file contains OS specific information about compiler flags, creation of executables, libraries, and object files.<br />
<br />
Next, CMake loads CMake(LANG)Information.cmake for each LANG that was enabled, which loads ${CMAKE_SYSTEM_NAME}-${COMPILER_ID}-LANG-${CMAKE_SYSTEM_PROCESSOR}.cmake if it exists, and after that ${CMAKE_SYSTEM_NAME}-${COMPILER_ID}-LANG.cmake. In these file names COMPILER_ID references the compiler identification determined as described above. The CMake(LANG)Information.cmake file contains default rules for creating executables, libraries, and object files on most UNIX systems. The defaults can be overridden by setting values in either ${CMAKE_SYSTEM_NAME}.cmake or ${CMAKE_SYSTEM_NAME}-${COMPILER_ID}-LANG.cmake.<br />
<br />
${CMAKE_SYSTEM_NAME}-${COMPILER_ID}-LANG-${CMAKE_SYSTEM_PROCESSOR}.cmake is intended to be used only for cross compiling, and is loaded before ${CMAKE_SYSTEM_NAME}-${COMPILER_ID}-LANG.cmake, so variables can be set up which can then be used in the rule variables.<br />
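<br />
As a concrete illustration, for the CXX language with a compiler identified as GNU on a Linux system, the platform files consulted would be, in order, each only if it exists (the x86_64 processor value is an assumption):<br />
<br />
<syntaxhighlight lang="text"><br />
Platform/Linux.cmake<br />
Platform/Linux-GNU-CXX-x86_64.cmake   # cross compiling only<br />
Platform/Linux-GNU-CXX.cmake<br />
</syntaxhighlight><br />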
<br />
In addition to the files with the COMPILER_ID in their name, CMake also supports these files using the COMPILER_BASE_NAME. COMPILER_BASE_NAME is the name of the compiler with no path information. For example, cl would be the COMPILER_BASE_NAME for the Microsoft Windows compiler, and Windows-cl.cmake would be loaded. If a COMPILER_ID exists, it is preferred over the COMPILER_BASE_NAME, since the same compiler can have different names on different systems, while different compilers can share the same name. This means, if<br />
<br />
<syntaxhighlight lang="text"><br />
${CMAKE_SYSTEM_NAME}-${COMPILER_ID}-LANG-${CMAKE_SYSTEM_PROCESSOR}.cmake<br />
</syntaxhighlight><br />
<br />
was not found, CMake tries<br />
<br />
<syntaxhighlight lang="text"><br />
${CMAKE_SYSTEM_NAME}-${COMPILER_BASE_NAME}-LANG-${CMAKE_SYSTEM_PROCESSOR}.cmake<br />
</syntaxhighlight><br />
<br />
and if<br />
<br />
<syntaxhighlight lang="text"><br />
${CMAKE_SYSTEM_NAME}-${COMPILER_ID}-LANG.cmake<br />
</syntaxhighlight><br />
<br />
was not found, CMake tries<br />
<br />
<syntaxhighlight lang="text"><br />
${CMAKE_SYSTEM_NAME}-${COMPILER_BASE_NAME}.cmake<br />
</syntaxhighlight><br />
<br />
CMake(LANG)Information.cmake and the associated Platform files define special CMake variables, called rule variables. A rule variable consists of a list of commands separated by spaces, each enclosed by quotes. In addition to the normal variable expansion performed by CMake, some special tag variables are expanded by the Makefile generator. Tag variables have the syntax <NAME>, where NAME is the name of the variable. An example rule variable is CMAKE_CXX_CREATE_SHARED_LIBRARY, and the default setting is<br />
<br />
<syntaxhighlight lang="text"><br />
set (CMAKE_CXX_CREATE_SHARED_LIBRARY<br />
"<CMAKE_CXX_COMPILER> <CMAKE_SHARED_LIBRARY_CXX_FLAGS><br />
<LINK_FLAGS> <CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS><br />
<CMAKE_SHARED_LIBRARY_SONAME_CXX_FLAG><TARGET_SONAME> -o<br />
<TARGET> <OBJECTS> <LINK_LIBRARIES>")<br />
</syntaxhighlight><br />
<br />
At this point, CMake has determined the system it is running on, the tools it will be using to compile the enabled languages, and the rules to use the tools. This means there is enough information for CMake to perform a try-compile. CMake will now test the detected compilers for each enabled language by loading CMakeTest(LANG)Compiler.cmake. This file will usually run a try-compile on a simple source file for the given language to make sure the chosen compiler actually works.<br />
<br />
Once the platform has been determined and the compilers have been tested, CMake loads a few more files that can be used to change some of the computed values. The first file that is loaded is CMake(PROJECTNAME)Compatibility.cmake, where PROJECTNAME is the name given to the top level PROJECT command in the project. The project compatibility file is used to add backwards compatibility fixes into CMake. For example, if a new version of CMake fails to build a project that the previous version of CMake could build, then fixes can be added to CMake on a per-project basis. The last file that is loaded is the file named by ${CMAKE_USER_MAKE_RULES_OVERRIDE}. This variable is optionally supplied by the user, and allows a project to make very specific platform-based changes to the build rules.<br />
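<br />
For illustration, a project that wants to supply its own rule overrides might set this variable before its project() command; the file name ProjectRulesOverride.cmake here is hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
# must be set before project() so it is picked up while the<br />
# platform and compiler files are being loaded<br />
set (CMAKE_USER_MAKE_RULES_OVERRIDE<br />
     ${CMAKE_CURRENT_SOURCE_DIR}/ProjectRulesOverride.cmake)<br />
<br />
project (MyProject CXX)<br />
</syntaxhighlight><br />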
<br />
<br />
===Porting to a New Platform===<br />
<br />
Many common platforms are already supported by CMake. However, you may come across a compiler or platform that has not yet been used. If the compiler uses an Integrated Development Environment (IDE), then you will have to extend CMake from the C++ level. However, if the compiler supports a standard make program, then you can specify in CMake the rules to use to compile object code and build libraries by creating CMake configuration files. These files are written using the CMake language with a few special tags that are expanded when the Makefiles are created by CMake. If you run CMake on your system and get a message like the following, you will want to read how to create platform specific settings.<br />
<br />
<syntaxhighlight lang="text"><br />
System is unknown to CMake, create:<br />
Modules/Platform/MySystem.cmake<br />
to use this system, please send your config file to<br />
cmake@www.cmake.org so it can be added to CMake<br />
</syntaxhighlight><br />
<br />
At a minimum you will need to create the Platform/${CMAKE_SYSTEM_NAME}.cmake file for the new platform. Depending on the tools for the platform, you may also want to create Platform/${CMAKE_SYSTEM_NAME}-${COMPILER_BASE_NAME}.cmake. On most systems, there is a vendor compiler and the GNU compiler. The rules for both of these compilers can be put in Platform/${CMAKE_SYSTEM_NAME}.cmake instead of creating separate files for each of the compilers. For most new systems or compilers, if they follow the basic UNIX compiler flags you will only need to specify the system-specific flags for shared library and module creation.<br />
<br />
The following example is from Platform/IRIX.cmake. This file specifies several flags, and also one CMake rule variable. The rule variable tells CMake how to use the IRIX CC compiler to create a static library, which is required for template instantiation to work with IRIX CC.<br />
<br />
<syntaxhighlight lang="text"><br />
# there is no -ldl required on this system<br />
set (CMAKE_DL_LIBS "")<br />
<br />
# Specify the flag to create a shared c library<br />
set (CMAKE_SHARED_LIBRARY_CREATE_C_FLAGS<br />
"-shared -rdata_shared")<br />
<br />
# Specify the flag to create a shared c++ library<br />
set (CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS<br />
"-shared -rdata_shared")<br />
<br />
# specify the flag to specify run time paths for shared<br />
# libraries -rpath<br />
set (CMAKE_SHARED_LIBRARY_RUNTIME_C_FLAG "-Wl,-rpath,")<br />
<br />
# specify a separator for paths on the -rpath, if empty<br />
# then -rpath will be repeated.<br />
set (CMAKE_SHARED_LIBRARY_RUNTIME_C_FLAG_SEP "")<br />
<br />
# if the compiler is not GNU, then specify the initial flags<br />
if (NOT CMAKE_COMPILER_IS_GNUCXX)<br />
<br />
# use the CC compiler to create static library<br />
set (CMAKE_CXX_CREATE_STATIC_LIBRARY<br />
"<CMAKE_CXX_COMPILER> -ar -o <TARGET> <OBJECTS>")<br />
<br />
# initializes flags for the native compiler<br />
set (CMAKE_CXX_FLAGS_INIT "")<br />
set (CMAKE_CXX_FLAGS_DEBUG_INIT "-g")<br />
set (CMAKE_CXX_FLAGS_MINSIZEREL_INIT "-O3 -DNDEBUG")<br />
set (CMAKE_CXX_FLAGS_RELEASE_INIT "-02 -DNDEBUG")<br />
set (CMAKE_CXX_FLAGS_RELWITHDEBINFO_INIT "-O2")<br />
endif (NOT CMAKE_COMPILER_IS_GNUCXX)<br />
</syntaxhighlight><br />
<br />
<br />
===Adding a New Language===<br />
<br />
In addition to porting CMake to new platforms, a user may want to add a new language. This can be done either through the use of custom commands, or by defining a new language for CMake. Once a new language is defined, the standard add_library() (page 274) and add_executable() (page 273) commands can be used to create libraries and executables for the new language. To add a new language, you need to create four files. The name LANG has to match, in exact case, the name used in the PROJECT() (page 327) command to enable the language. For example, Fortran has the file CMakeDetermineFortranCompiler.cmake, and it is enabled with a call like project (f Fortran). The four files are as follows:<br />
<br />
'''CMakeDetermine(LANG)Compiler.cmake''' This file will find the path to the compiler for LANG and then configure CMake(LANG)Compiler.cmake.in.<br />
<br />
'''CMake(LANG)Compiler.cmake.in''' This file should be used as input to a configure_file() call in the CMakeDetermine(LANG)Compiler.cmake file. It is used to store compiler information and is copied down into try-compile directories so that try compiles do not need to re-determine and test the LANG compiler.<br />
<br />
'''CMakeTest(LANG)Compiler.cmake''' This should make use of a try_compile() command to make sure the compiler and tools are working. If the tools are working, the following variable should be set in this way:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CMAKE_(LANG)_COMPILER_WORKS 1 CACHE INTERNAL "")<br />
</syntaxhighlight><br />
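<br />
For illustration only, a minimal CMakeTest(LANG)Compiler.cmake for a hypothetical language MYLANG might be sketched as follows; the language name and the test source file name are made up:<br />
<br />
<syntaxhighlight lang="text"><br />
# try to compile a trivial MYLANG source file<br />
try_compile (CMAKE_MYLANG_COMPILER_WORKS<br />
    ${CMAKE_BINARY_DIR}<br />
    ${CMAKE_ROOT}/Modules/testMYLANGCompiler.mylang<br />
    OUTPUT_VARIABLE OUTPUT)<br />
<br />
# cache the result so later runs can skip the test<br />
if (CMAKE_MYLANG_COMPILER_WORKS)<br />
  set (CMAKE_MYLANG_COMPILER_WORKS 1 CACHE INTERNAL "")<br />
endif ()<br />
</syntaxhighlight><br />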
<br />
'''CMake(LANG)Information.cmake''' Set values for the following rule variables for LANG:<br />
<br />
<syntaxhighlight lang="text"><br />
</syntaxhighlight><br />
<br />
<br />
===Rule Variable Listing===<br />
<br />
For each language that CMake supports, the following rule variables are expanded into build Makefiles at generation time. LANG is the name used in the PROJECT (name LANG) command. CMake currently supports CXX, C, Fortran, and Java as values for LANG.<br />
<br />
<br />
====General Tag Variables====<br />
<br />
The following set of variables will be expanded by CMake.<br />
<br />
'''<TARGET>''' The name of the target being built (this may be a full path).<br />
<br />
'''<TARGET_QUOTED>''' The name of the target being built (this may be a full path) double quoted.<br />
<br />
'''<TARGET_BASE>''' This is replaced by the name of the target without a suffix.<br />
<br />
'''<TARGET_SONAME>''' This is replaced by CMAKE_SHARED_LIBRARY_SONAME_(LANG)_FLAG.<br />
<br />
'''<OBJECTS>''' This is the list of object files to be linked into the target.<br />
<br />
'''<OBJECTS_QUOTED>''' This is the list of object files to be linked into the target double quoted.<br />
<br />
'''<OBJECT>''' This is the name of the object file to be built.<br />
<br />
'''<LINK_LIBRARIES>''' This is the list of libraries that are linked into an executable or shared object.<br />
<br />
'''<FLAGS>''' This contains the command line flags for the linker or compiler.<br />
<br />
'''<LINK_FLAGS>''' These are the flags used at link time.<br />
<br />
'''<SOURCE>''' The source file name.<br />
<br />
<br />
====Language Specific Information====<br />
<br />
The following set of variables related to the compiler tools will also be expanded.<br />
<br />
'''<CMAKE_(LANG)_COMPILER>''' This is the (LANG) compiler command.<br />
<br />
'''<CMAKE_SHARED_LIBRARY_CREATE_(LANG)_FLAGS>''' These are the flags used to create a shared library for (LANG) code.<br />
<br />
'''<CMAKE_SHARED_MODULE_CREATE_(LANG)_FLAGS>''' These are the flags used to create a shared module for (LANG) code.<br />
<br />
'''<CMAKE_(LANG)_LINK_FLAGS>''' These are the flags used to link a (LANG) program.<br />
<br />
'''<CMAKE_AR>''' This is the command to create a .a archive file.<br />
<br />
'''<CMAKE_RANLIB>''' This is the command to run ranlib on a .a archive file.<br />
<br />
<br />
===Compiler and Platform Examples===<br />
<br />
====Como Compiler====<br />
<br />
A good example to look at is the como compiler on Linux, found in Modules/Platform/Linux-como.cmake. This compiler requires several non-standard commands when creating libraries and executables in order to instantiate C++ templates.<br />
<br />
<syntaxhighlight lang="text"><br />
# create a shared C++ library<br />
set (CMAKE_CXX_CREATE_SHARED_LIBRARY<br />
"<CMAKE_CXX_COMPILER> --prelink_objects <OBJECTS>"<br />
"<CMAKE_CXX_COMPILER><br />
<CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS> <LINK_FLAGS> -o <TARGET><br />
<OBJECTS> <LINK_LIBRARIES>")<br />
<br />
# create a C++ static library<br />
set (CMAKE_CXX_CREATE_STATIC_LIBRARY<br />
"<CMAKE_CXX_COMPILER> --prelink_objects <OBJECTS>"<br />
"<CMAKE_AR> cr <TARGET> <LINK_FLAGS> <OBJECTS>"<br />
"<CMAKE_RANLIB> <TARGET> ")<br />
<br />
set (CMAKE_CXX_LINK_EXECUTABLE<br />
"<CMAKE_CXX_COMPILER> --prelink_objects <OBJECTS>"<br />
"<CMAKE_CXX_COMPILER> <CMAKE_CXX_LINK_FLAGS> <LINK_FLAGS><br />
<FLAGS> <OBJECTS> -o <TARGET> <LINK_LIBRARIES>")<br />
set (CMAKE_SHARED_LIBRARY_RUNTIME_FLAG "")<br />
set (CMAKE_SHARED_LIBRARY_C_FLAGS "")<br />
set (CMAKE_SHARED_LIBRARY_LINK_FLAGS "")<br />
</syntaxhighlight><br />
<br />
This overrides the creation of libraries (shared and static), and the linking of executable C++ programs. You can see that the linking process of executables and shared libraries requires an extra command that calls the compiler with the flag --prelink_objects, and gets all of the object files passed to it.<br />
<br />
<br />
====Borland Compiler====<br />
<br />
The full Borland compiler rules can be found in Platform/Windows-bcc32.cmake. The following code is an excerpt from that file, showing some of the features used to define rules for the Borland compiler set.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CMAKE_CXX_CREATE_SHARED_LIBRARY<br />
"<CMAKE_CXX_COMPILER> ${CMAKE_START_TEMP_FILE}-e<TARGET><br />
-tWD <LINK_FLAGS> -tWR <LINK_LIBRARIES> <OBJECTS>${CMAKE_END_TEMP_FILE}"<br />
"implib -c -w <TARGET_BASE>.lib <TARGET_BASE>.dll"<br />
)<br />
<br />
set (CMAKE_CXX_CREATE_SHARED_MODULE<br />
${CMAKE_CXX_CREATE_SHARED_LIBRARY})<br />
<br />
# create a C shared library<br />
set (CMAKE_C_CREATE_SHARED_LIBRARY<br />
"<CMAKE_C_COMPILER> ${CMAKE_START_TEMP_FILE}-e<TARGET> -tWD<br />
<LINK_FLAGS> -tWR <LINK_LIBRARIES><br />
<OBJECTS>${CMAKE_END_TEMP_FILE}"<br />
"implib -c -w <TARGET_BASE>.lib <TARGET_BASE>.dll"<br />
)<br />
<br />
# create a C++ static library<br />
set (CMAKE_CXX_CREATE_STATIC_LIBRARY "tlib<br />
${CMAKE_START_TEMP_FILE}/p512 <LINK_FLAGS> /a <TARGET_QUOTED><br />
<OBJECTS_QUOTED>${CMAKE_END_TEMP_FILE}")<br />
<br />
# compile a C++ file into an object file<br />
set (CMAKE_CXX_COMPILE_OBJECT<br />
"<CMAKE_CXX_COMPILER> ${CMAKE_START_TEMP_FILE}-DWIN32 -P<br />
<FLAGS> -o<OBJECT> -c <SOURCE>${CMAKE_END_TEMP_FILE}")<br />
</syntaxhighlight><br />
<br />
<br />
===Extending CMake===<br />
<br />
Occasionally you will come across a situation where you want to do something during your build process that CMake cannot seem to handle. Examples of this include creating wrappers for C++ classes to make them available to other languages, or creating bindings for C++ classes to support runtime introspection. In these cases you may want to extend CMake by adding your own commands. CMake supports this capability through its C plugin API. Using this API, a project can extend CMake to add specialized commands to handle project-specific tasks.<br />
<br />
A loaded command in CMake is essentially a C code plugin that is compiled into a shared library (a.k.a. DLL). This shared library can then be loaded into the running CMake to provide the functionality of the loaded command. Creating a loaded command is a two step process. You must first write the C code and CMakeLists file for the command, and place it in your source tree. Next you must modify your project's CMakeLists file to compile the loaded command and load it. Before resorting to creating a loaded command, however, you should first see if you can accomplish what you want with a macro. A CMake macro or function has almost the same level of flexibility as a loaded command, but does not require compilation or as much complexity; you can almost always, and should, use a macro or function instead of a loaded command. That said, we will start by looking at writing the plugin.<br />
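<br />
As a sketch of that advice: a loaded command whose only job is to define a variable, like the HELLO_WORLD example in the next section, can be replaced by a plain function with no compilation step at all:<br />
<br />
<syntaxhighlight lang="text"><br />
# a function with the same effect as a trivial loaded command<br />
# that only defines a variable in the caller's scope<br />
function (HELLO_WORLD)<br />
  set (FOO "BAR" PARENT_SCOPE)<br />
endfunction ()<br />
</syntaxhighlight><br />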
<br />
<br />
====Creating a Loaded Command====<br />
<br />
While CMake itself is written in C++, we suggest that you write your plugins using only C code. This avoids a number of portability and compiler issues that can plague C++ plugins being loaded into CMake executables. The API for a plugin is defined in the header file cmCPluginAPI.h. This file defines all of the CMake functions that you can invoke from your plugin. It also defines the cmLoadedCommandInfo structure that is passed to a plugin. Before going into detail about these functions, consider the following simple plugin:<br />
<br />
<syntaxhighlight lang="text"><br />
#include "cmCPluginAPI.h"<br />
static int InitialPass (void *inf, void *mf,<br />
int argc, char *argv[])<br />
{<br />
cmLoadedCommandInfo *info = (cmLoadedCommandInfo *) inf;<br />
info->CAPI->AddDefinition(mf, "FOO", "BAR");<br />
<br />
return 1;<br />
}<br />
<br />
void CM_PLUGIN_EXPORT<br />
HELLO_WORLDInit (cmLoadedCommandInfo *info)<br />
{<br />
info->InitialPass = InitialPass;<br />
info->Name = "HELLO WORLD";<br />
}<br />
</syntaxhighlight><br />
<br />
First this plugin includes the cmCPluginAPI.h file to get the definitions and structures required for a plugin. Next it defines a static function called InitialPass that will be called whenever this loaded command is invoked. This function is always passed four parameters: the cmLoadedCommandInfo structure, the Makefile, the number of arguments, and the list of arguments. Inside this function, we typecast the inf argument to its actual type and then use it to invoke the C API (CAPI) AddDefinition function. This function will set the variable FOO to the value BAR in the current cmMakefile instance.<br />
<br />
The second function is called HELLO_WORLDInit, and it will be called when the plugin is loaded. The name of this function must exactly match the name of the loaded command with Init appended. In this example the name of the command is HELLO_WORLD, so the function is named HELLO_WORLDInit. This function will be called as soon as your command is loaded. It is responsible for initializing the elements of the cmLoadedCommandInfo structure. In this example it sets the InitialPass member to the address of the InitialPass function defined above. It then sets the name of the command by setting the Name member to HELLO_WORLD.<br />
<br />
<br />
====Using a Loaded Command====<br />
<br />
Now let us consider how to use this new HELLO_WORLD command in a project. The basic process is that CMake will have to compile the plugin into a shared library and then dynamically load it. To do this you first create a subdirectory in your project's source tree called CMake or CMakeCommands (by convention; any name can be used). Place the source code to your plugin in that directory. We recommend naming the file with the prefix cm and then the name of the command, for example cmHELLO_WORLD.c. Then you must create a simple CMakeLists.txt file for this directory that includes instructions to build the shared library. Typically this will be the following:<br />
<br />
<syntaxhighlight lang="text"><br />
project (HELLO_WORLD)<br />
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}"<br />
"${(CMAKE_ANSI_CXXFLAGS}"<br />
)<br />
<br />
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS}"<br />
"${CMAKE_ANSI_CFLAGS}"<br />
)<br />
<br />
include_directories (${CMAKE_ROOT}/include<br />
${CMAKE_ROOT}/Source<br />
)<br />
<br />
add_library (cmHELLO_WORLD MODULE cmHELLO_WORLD.c)<br />
</syntaxhighlight><br />
<br />
It is critical that you name the library cm, followed by the name of the command as shown in the add_library call in the above example (e.g. cmHELLO_WORLD). When CMake loads a command it assumes that the command is in a library named using that pattern. The next step is to modify your project's main CMakeLists file to compile and load the plugin. This can be accomplished with the following code:<br />
<br />
<syntaxhighlight lang="text"><br />
# if the command has not been loaded, compile and load it<br />
if (NOT COMMAND HELLO_WORLD)<br />
<br />
# try compiling it first<br />
try_compile (COMPILE_OK<br />
${PROJECT_BINARY_DIR}/CMake<br />
${PROJECT_SOURCE_DIR}/CMake<br />
HELLO_WORLD<br />
)<br />
<br />
# if it compiled OK then load it<br />
if (COMPILE_OK)<br />
load_command (HELLO_WORLD<br />
${PROJECT_BINARY_DIR}/CMake<br />
${PROJECT_BINARY_DIR}/CMake/Debug<br />
)<br />
<br />
# if it did not compile OK, then display an error<br />
else (COMPILE_OK)<br />
message ("error compiling HELLO_WORLD extension")<br />
endif (COMPILE_OK)<br />
<br />
endif (NOT COMMAND HELLO_WORLD)<br />
</syntaxhighlight><br />
<br />
<br />
In the above example you would simply replace HELLO_WORLD with the name of your command and replace<br />
${PROJECT_SOURCE_DIR}/CMake with the actual name of the subdirectory where you placed your<br />
loaded command. Now, let us look at creating loaded commands in more detail. We will start by looking at<br />
the cmLoadedCommandInfo structure.<br />
<br />
<syntaxhighlight lang="text"><br />
typedef const char* (*CM_DOC_FUNCTION) ();<br />
<br />
typedef int (*CM_INITIAL_PASS_FUNCTION) (<br />
void *info, void *mf, int argc, char *[]);<br />
<br />
typedef void (*CM_FINAL_PASS_FUNCTION) (<br />
void *info, void *mf);<br />
typedef void (*CM_DESTRUCTOR_FUNCTION) (void *info);<br />
<br />
typedef struct {<br />
unsigned long reserved1;<br />
unsigned long reserved2;<br />
cmCAPI *CAPI;<br />
int m_Inherited;<br />
CM_INITIAL_PASS_FUNCTION InitialPass;<br />
CM_FINAL_PASS_FUNCTION FinalPass;<br />
CM_DESTRUCTOR_FUNCTION Destructor;<br />
CM_DOC_FUNCTION GetTerseDocumentation;<br />
CM_DOC_FUNCTION GetFullDocumentation;<br />
const char *Name;<br />
char *Error;<br />
void *ClientData;<br />
} cmLoadedCommandInfo;<br />
</syntaxhighlight><br />
<br />
<br />
The first two entries of the structure are reserved for future use. The next entry, CAPI, is a pointer to a structure containing pointers to all the CMake functions you can invoke from a plugin. The m_Inherited member only applies to CMake versions 2.0 and earlier. It can be set to indicate if this command should be inherited by subdirectories or not. If you are creating a command that will work with versions of CMake prior to 2.2 then you probably want to set this to zero. The next five members are pointers to functions that your plugin may provide. The InitialPass function must be provided, and it is invoked whenever your loaded command is invoked from a CMakeLists file. The FinalPass function is optional, and is invoked after configuration but before generation of the output. The Destructor function is optional, and will be invoked when your command is destroyed by CMake (typically on exit). It can be used to clean up any memory that you have allocated in the InitialPass or FinalPass. The next two functions are optional, and are used to provide documentation for your command. The Name member is used to store the name of your command. This is what will be compared against when parsing a CMakeLists file. It should be in all caps in keeping with CMake's naming conventions. The Error and ClientData members are used internally by CMake; you should not directly access them. Instead you can use the CAPI functions to manipulate them.<br />
<br />
Let us consider some of the common CAPI functions you will use from within a loaded command. First, we will consider some utility functions that are provided specifically for loaded commands. Since loaded commands use a C interface, they will receive arguments as (int argc, char *argv[]). For convenience, you can call GetTotalArgumentSize(argc, argv), which will return the total length of all the arguments. Likewise, some CAPI methods will return an (argc, argv) pair that you will be responsible for freeing. The FreeArguments(argc, argv) function can be used to free such return values. If your loaded command has a FinalPass(), then you might want to pass data from the InitialPass() to the FinalPass() invocation. This can be accomplished using the SetClientData(void *info, void *data) and void *GetClientData(void *info) functions. Since the client data is passed as a void * argument, any client data larger than a pointer must be allocated and then finally freed in your Destructor() function. Be aware that CMake will create multiple instances of your loaded command, so using global variables or static variables is not recommended. If you encounter an error while executing your loaded command, you can call SetError(void *info, const char *errorString) to pass an error message on to the user.<br />
<br />
Another group of CAPI functions worth noting are the cmSourceFile functions. cmSourceFile is a C++ object that represents information about a single file, including its full path, file extension, special compiler flags, etc. Some loaded commands will need to either create or access cmSourceFile instances. This can be done using the void *CreateSourceFile() and void *GetSource(void *mf, const char *sourceName) functions. Both of these functions return a pointer to a cmSourceFile as a void * return value. This pointer can then be passed into other functions that manipulate cmSourceFiles, such as SourceFileGetProperty() or SourceFileSetProperty().<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=MastringCmakeVersion31:Chapter_10&diff=5610MastringCmakeVersion31:Chapter 102020-09-21T12:07:07Z<p>Onionmixer: fixed unproofread portions</p>
<hr />
<div>==CHAPTER TEN::AUTOMATION & TESTING WITH CMAKE==<br />
<br />
===Testing with CMake, CTest, and CDash===<br />
<br />
Testing is a key tool for producing and maintaining robust, valid software. This chapter will examine the tools that are part of CMake to support software testing. We will begin with a brief discussion of testing approaches, and then discuss how to add tests to your software project using CMake. Finally we will look at additional tools that support creating centralized software status dashboards.<br />
<br />
The tests for a software package may take a number of forms. At the most basic level there are smoke tests, such as one that simply verifies that the software compiles. While this may seem like a simple test, with the wide variety of platforms and configurations available, smoke tests catch more problems than any other type of test. Another form of smoke test is to verify that a test runs without crashing. This can be handy for situations where the developer does not want to spend the time creating more complex tests, but is willing to run some simple tests. Most of the time these simple tests can be small example programs. Running them verifies not only that the build was successful, but that any required shared libraries can be loaded (for projects that use them), and that at least some of the code can be executed without crashing.<br />
<br />
Moving beyond basic smoke tests leads to more specific tests such as regression, black-, and white-box testing. Each of these has its strengths. Regression testing verifies that the results of a test do not change over time or platform. This is very useful when performed frequently, as it provides a quick check that the behavior and results of the software have not changed. When a regression test fails, a quick look at recent code changes can usually identify the culprit. Unfortunately, regression tests typically require more effort to create than other tests.<br />
<br />
White- and black-box testing refer to tests written to exercise units of code (at various levels of integration), with and without knowledge of how those units are implemented, respectively. White-box testing is designed to stress potential failure points in the code knowing how that code was written, and hence its weaknesses. As with regression testing, this can take a substantial amount of effort to create good tests. Black-box testing typically knows little or nothing about the implementation of the software other than its public API. Black-box testing can provide a lot of code coverage without too much effort in developing the tests. This is especially true for libraries of object-oriented software where the APIs are well defined. A black-box test can be written to go through and invoke a number of typical methods on all the classes in the software.<br />
<br />
The final type of testing we will discuss is software standard compliance testing. While the other test types we have discussed are focused on determining if the code works properly, compliance testing tries to determine if the code adheres to the coding standards of the software project. This could be a check to verify that all classes have implemented some key method, or that all functions have a common prefix. The options for this type of test are limitless and there are a number of ways to perform such testing. There are software analysis tools that can be used, or specialized test programs (maybe python scripts etc) could be written. The key point to realize is that the tests do not necessarily have to involve running some part of the software. The tests might run some other tool on the source code itself.<br />
<br />
There are a number of reasons why it helps to have testing support integrated into the build process. First, complex software projects may have a number of configuration or platform-dependent options. The build system knows what options can be enabled and can then enable the appropriate tests for those options. For example, the Visualization Toolkit (VTK) includes support for a parallel processing library called MPI. If VTK is built with MPI support then additional tests are enabled that make use of MPI and verify that the MPI-specific code in VTK works as expected. Secondly, the build system knows where the executables will be placed, and it has tools for finding other required executables (such as perl, python etc). The third reason is that with UNIX Makefiles it is common to have a test target in the Makefile so that developers can type make test and have the test(s) run. In order for this to work, the build system must have some knowledge of the testing process.<br />
<br />
<br />
===How Does CMake Facilitate Testing?===<br />
<br />
CMake facilitates testing your software through special testing commands and the CTest executable. First, we will discuss the key testing commands in CMake. To add testing to a CMake-based project, simply include(CTest) (page 317) and use the add_test() (page 277) command. The add_test command has a simple syntax as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (NAME TestName COMMAND ExecutableToRun arg1 arg2 ...)<br />
</syntaxhighlight><br />
<br />
The first argument is simply a string name for the test. This is the name that will be displayed by testing programs. The second argument is the executable to run. The executable can be built as part of the project or it can be a standalone executable such as python, perl, etc. The remaining arguments will be passed to the running executable. A typical example of testing using the add_test command would look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (TestInstantiator TestInstantiator.cxx)<br />
target_link_libraries (TestInstantiator vtkCommon)<br />
add_test (NAME TestInstantiator<br />
COMMAND TestInstantiator)<br />
</syntaxhighlight><br />
<br />
The add_test command is typically placed in the CMakeLists file for the directory that has the test in it. For large projects, there may be multiple CMakeLists files with add_test commands in them. Once the add_test commands are present in the project, the user can run the tests by invoking the "test" target of the Makefile, or the RUN_TESTS target in Visual Studio or Xcode. An example of running tests on the CMake tests using the Makefile generator on Linux would be:<br />
<br />
<syntaxhighlight lang="text"><br />
$ make test<br />
Running tests...<br />
Test project<br />
Start 2: kwsys.testEncode<br />
1/20 Test #2: kwsys.testEncode .......... Passed 0.02 sec<br />
Start 3: kwsys.testTerminal<br />
2/20 Test #3: kwsys.testTerminal ........ Passed 0.02 sec<br />
Start 4: kwsys.testAutoPtr<br />
3/20 Test #4: kwsys.testAutoPtr ......... Passed 0.02 sec<br />
</syntaxhighlight><br />
<br />
<br />
===Additional Test Properties===<br />
<br />
By default a test passes if all of the following conditions are true:<br />
<br />
* The test executable was found<br />
* The test ran without exception<br />
* The test exited with return code 0<br />
<br />
That said, these behaviors can be modified using the set_property() (page 329) command:<br />
<br />
<syntaxhighlight lang="text"><br />
set_property (TEST test_name<br />
PROPERTY prop1 value1 value2 ...)<br />
</syntaxhighlight><br />
<br />
This command will set additional properties for the specified tests. Example properties are:<br />
<br />
'''ENVIRONMENT''' Specifies environment variables that should be defined for running a test. If set to a list of environment variables and values of the form MYVAR=value, those environment variables will be defined while the test is running. The environment is restored to its previous state after the test is done.<br />
<br />
'''LABELS''' Specifies a list of text labels associated with a test. These labels can be used to group tests together based on what they test. For example, you could add a label of MPI to all tests that exercise MPI code.<br />
<br />
'''WILL_FAIL''' If this option is set to true, then the test will pass if the return code is not 0, and fail if it is. This reverses the third condition of the pass requirements.<br />
<br />
'''PASS_REGULAR_EXPRESSION''' If this option is specified, then the output of the test is checked against the regular expression provided (a list of regular expressions may be passed in as well). If none of the regular expressions match, then the test will fail. If at least one of them matches, then the test will pass.<br />
<br />
'''FAIL_REGULAR_EXPRESSION''' If this option is specified, then the output of the test is checked against the regular expression provided (a list of regular expressions may be passed in as well). If none of the regular expressions match, then the test will pass. If at least one of them matches, then the test will fail.<br />
<br />
If both PASS_REGULAR_EXPRESSION (page 614) and FAIL_REGULAR_EXPRESSION (page 613) are specified, then the FAIL_REGULAR_EXPRESSION takes precedence. The following example illustrates using the PASS_REGULAR_EXPRESSION and FAIL_REGULAR_EXPRESSION:<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (NAME outputTest COMMAND outputTest)<br />
set (passRegex "^Test passed" "^*All ok")<br />
set (failRegex "Error" "Fail")<br />
<br />
set_property (TEST outputTest<br />
PROPERTY PASS_REGULAR_EXPRESSION "${passRegex}")<br />
set_property (TEST outputTest<br />
PROPERTY FAIL_REGULAR_EXPRESSION "${failRegex}")<br />
</syntaxhighlight><br />
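<br />
The precedence between these two properties can also be illustrated with a short Python sketch. This only models the decision logic described above; it is not CTest's actual implementation, and the function name is made up for illustration:<br />

```python
import re

def ctest_outcome(output, pass_regexes=None, fail_regexes=None, returncode=0):
    """Simplified model of how CTest decides pass/fail for a test."""
    # FAIL_REGULAR_EXPRESSION takes precedence: any match fails the test.
    if fail_regexes and any(re.search(rx, output) for rx in fail_regexes):
        return "Failed"
    # With PASS_REGULAR_EXPRESSION set, at least one pattern must match.
    if pass_regexes:
        return "Passed" if any(re.search(rx, output) for rx in pass_regexes) else "Failed"
    # Otherwise the return code alone decides.
    return "Passed" if returncode == 0 else "Failed"
```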
<br />
<br />
===Testing Using CTest===<br />
<br />
When you run the tests from your build environment, what really happens is that the build environment runs CTest. CTest is an executable that comes with CMake; it handles running the tests for the project. While CTest works well with CMake, you do not have to use CMake in order to use CTest. The main input file for CTest is called CTestTestfile.cmake. This file will be created in each directory that was processed by CMake (typically every directory with a CMakeLists file). The syntax of CTestTestfile.cmake is like the regular CMake syntax, with a subset of the commands available. If CMake is used to generate testing files, they will list any subdirectories that need to be processed as well as any add_test() (page 277) calls. The subdirectories are those that were added by subdirs() (page 350) or add_subdirectory() (page 277) commands. CTest can then parse these files to determine what tests to run. An example of such a file is shown below:<br />
<br />
<syntaxhighlight lang="text"><br />
# CMake generated Testfile for<br />
# Source directory: C:/CMake<br />
# Build directory: C:/CMakeBin<br />
#<br />
# This file includes the relevant testing commands required<br />
# for testing this directory and lists subdirectories to<br />
# be tested as well.<br />
<br />
ADD_TEST (SystemInformationNew ...)<br />
<br />
SUBDIRS (Source/kwsys)<br />
SUBDIRS (Utilities/cmzlib)<br />
...<br />
</syntaxhighlight><br />
<br />
When CTest parses the CTestTestfile.cmake files, it will extract the list of tests from them. These tests will be run, and for each test CTest will display the name of the test and its status. Consider the following sample output:<br />
<br />
<syntaxhighlight lang="text"><br />
$ ctest<br />
Test project C:/CMake-build26<br />
Start 1: SystemInformationNew<br />
1/21 Test #1: SystemInformationNew ...... Passed 5.78 sec<br />
Start 2: kwsys.testEncode<br />
2/21 Test #2: kwsys.testEncode .......... Passed 0.02 sec<br />
Start 3: kwsys.testTerminal<br />
3/21 Test #3: kwsys.testTerminal ........ Passed 0.00 sec<br />
Start 4: kwsys.testAutoPtr<br />
4/21 Test #4: kwsys.testAutoPtr ......... Passed 0.02 sec<br />
Start 5: kwsys.testHashSTL<br />
5/21 Test #5: kwsys.testHashSTL ......... Passed 0.02 sec<br />
...<br />
100% tests passed, 0 tests failed out of 21<br />
Total Test time (real) = 59.22 sec<br />
</syntaxhighlight><br />
<br />
CTest is run from within your build tree. It will run all the tests found in the current directory as well as any subdirectories listed in the CTestTestfile.cmake. For each test that is run CTest will report if the test passed and how long it took to run the test.<br />
<br />
The CTest executable includes some handy command line options to make testing a little easier. We will start by looking at the options you would typically use from the command line.<br />
<br />
<syntaxhighlight lang="text"><br />
-R <regex> Run tests matching regular expression<br />
-E <regex> Exclude tests matching regular expression<br />
-L <regex> Run tests with labels matching the regex<br />
-LE <regex> Run tests with labels not matching regexp<br />
-C <config> Choose the configuration to test<br />
-V, --verbose Enable verbose output from tests.<br />
-N, --show-only Disable actual execution of tests.<br />
-I [Start,End,Stride,test#,test#|Test file]<br />
Run specific tests by range and number.<br />
-H Display a help message<br />
</syntaxhighlight><br />
<br />
The -R option is probably the most commonly used. It allows you to specify a regular expression; only the tests with names matching the regular expression will be run. Using the -R option with the name (or part of the name) of a test is a quick way to run a single test. The -E option is similar, except that it excludes all tests matching the regular expression. The -L and -LE options are similar to -R and -E, except that they apply to test labels that were set using the set_property() (page 329) command as described in section 0. The -C option is mainly for IDE builds, where you might have multiple configurations such as Release and Debug in the same tree. The argument following -C determines which configuration will be tested. The -V argument is useful when you are trying to determine why a test is failing. With -V, CTest will print out the command line used to run the test, as well as any output from the test itself. The -V option can be used with any invocation of CTest to provide more verbose output. The -N option is useful if you want to see what tests CTest would run without actually running them.<br />
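<br />
For example, assuming the test names from the sample output shown earlier, a few typical invocations might look like the following sketch:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -R SystemInformation   # run only tests whose names match "SystemInformation"<br />
ctest -E kwsys               # run everything except the kwsys tests<br />
ctest -C Release -V          # test the Release configuration with verbose output<br />
ctest -N                     # list the tests that would run, without running them<br />
</syntaxhighlight><br />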
<br />
Running the tests and making sure they all pass before committing any changes to the software is a sure-fire way to improve your software quality and development process. Unfortunately, for large projects the number of tests and the time required to run them may be prohibitive. In these situations the -I option of CTest can be used. The -I option allows you to flexibly specify a subset of the tests to run. For example, the following invocation of CTest will run every seventh test.<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -I ,,7<br />
</syntaxhighlight><br />
<br />
While this is not as good as running every test, it is better than not running any and it may be a more practical solution for many developers. Note that if the start and end arguments are not specified, as in this example, then they will default to the first and last tests. In another example, assume that you always want to run a few tests plus a subset of the others. In this case you can explicitly add those tests to the end of the arguments for -I . For example:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -I ,,5,1,2,3,10<br />
</syntaxhighlight><br />
<br />
will run tests 1, 2, 3, and 10, plus every fifth test. You can pass as many test numbers as you want after the stride argument.<br />
<br />
<br />
===Using CTest to Drive Complex Tests===<br />
<br />
Sometimes to properly test a project you need to actually compile code during the testing phase. There are several reasons for this. First, if test programs are compiled as part of the main project, they can take up a significant amount of the build time. Also, if a test fails to build, the main build should not fail as well. Finally, IDE projects can quickly become too large to load and work with. The CTest command supports a group of command line options that allow it to be used as the test executable to run. When used as the test executable, CTest can run CMake, run the compile step, and finally run a compiled test. We will now look at the command line options to CTest that support building and running tests.<br />
<br />
<syntaxhighlight lang="text"><br />
--build-and-test src_directory build_directory<br />
Run cmake on the given source directory using the specified build directory.<br />
--test-command Name of the program to run.<br />
--build-target Specify a specific target to build.<br />
--build-nocmake Run the build without running cmake first.<br />
--build-run-dir Specify directory to run programs from.<br />
--build-two-config Run cmake twice before the build.<br />
--build-exe-dir Specify the directory for the executable.<br />
--build-generator Specify the generator to use.<br />
--build-project Specify the name of the project to build.<br />
--build-makeprogram Specify the make program to use.<br />
--build-noclean Skip the make clean step.<br />
--build-options Add extra options to the build step.<br />
</syntaxhighlight><br />
<br />
For an example, consider the following add_test() (page 277) command taken from the CMakeLists.txt file of CMake itself. It shows how CTest can be used both to compile and run a test.<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (simple ${CMAKE_CTEST_COMMAND}<br />
--build-and-test "${CMAKE_SOURCE_DIR}/Tests/Simple"<br />
"${CMAKE_BINARY_DIR}/Tests/Simple"<br />
--build-generator ${CMAKE_GENERATOR}<br />
--build-makeprogram ${CMAKE_MAKE_PROGRAM}<br />
--build-project Simple<br />
--test-command simple)<br />
</syntaxhighlight><br />
<br />
In this example, the add_test command is first passed the name of the test, "simple". After the name of the test, the command to be run is specified. In this case, the test command to be run is CTest. The CTest command is referenced via the CMAKE_CTEST_COMMAND (page 626) variable. This variable is always set by CMake to the CTest command that came from the CMake installation used to build the project. Next, the source and binary directories are specified. The next options to CTest are the --build-generator and --build-makeprogram options. These are specified using the CMake variables CMAKE_GENERATOR (page 628) and CMAKE_MAKE_PROGRAM (page 630), both of which are defined by CMake. This is an important step, as it makes sure that the same generator is used for building the test as was used for building the project itself. The --build-project option is passed Simple, which corresponds to the project() (page 327) command used in the Simple test. The final argument is --test-command, which tells CTest the command to run once it gets a successful build; it should be the name of the executable that will be compiled by the test.<br />
<br />
<br />
===Handling a Large Number of Tests===<br />
<br />
When a large number of tests exist in a single project, it is cumbersome to have individual executables available for each test. That said, the developer of the project should not be required to create tests with complex argument parsing. This is why CMake provides a convenience command for creating a test driver program. This command is called create_test_sourcelist() (page 282). A test driver is a program that links together many small tests into a single executable. This is useful when building static executables with large libraries to shrink the total required size. The signature for create_test_sourcelist is as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
create_test_sourcelist (SourceListName<br />
DriverName<br />
test1 test2 test3<br />
EXTRA_INCLUDE include.h<br />
FUNCTION function<br />
)<br />
</syntaxhighlight><br />
<br />
The first argument is the variable which will contain the list of source files that must be compiled to make the test executable. The DriverName is the name of the test driver program (e.g. the name of the resulting executable). The rest of the arguments consist of a list of test source files. Each test source file should have a function in it that has the same name as the file with no extension (foo.cxx should contain int foo(int argc, char* argv[])). The resulting executable will be able to invoke each of the tests by name on the command line. The EXTRA_INCLUDE and FUNCTION arguments support additional customization of the test driver program. Consider the following CMakeLists file fragment to see how this command can be used:<br />
<br />
<syntaxhighlight lang="text"><br />
# create the testing file and list of tests<br />
create_test_sourcelist (Tests<br />
CommonCxxTests.cxx<br />
ObjectFactory.cxx<br />
otherArrays.cxx<br />
otherEmptyCell.cxx<br />
TestSmartPointer.cxx<br />
SystemInformation.cxx<br />
)<br />
<br />
# add the executable<br />
add_executable (CommonCxxTests ${Tests})<br />
<br />
# remove the test driver source file<br />
set (TestsToRun ${Tests})<br />
remove (TestsToRun CommonCxxTests.cxx)<br />
<br />
# Add all the ADD_TEST for each test<br />
foreach (test ${TestsToRun})<br />
get_filename_component (TName ${test} NAME_WE)<br />
add_test (NAME ${TName} COMMAND CommonCxxTests ${TName})<br />
endforeach ()<br />
</syntaxhighlight><br />
<br />
The create_test_sourcelist command is invoked to create a test driver. In this case it creates and writes CommonCxxTests.cxx into the binary tree of the project, using the rest of the arguments to determine its contents. Next, the add_executable() (page 273) command is used to add that executable to the build. Then a new variable called TestsToRun is created with an initial value of the sources required for the test driver. The remove() (page 349) command is used to remove the driver program itself from the list. Then, a foreach() (page 309) command is used to loop over the remaining sources. For each source, its name without a file extension is extracted and put in the variable TName, then a new test is added for TName. The end result is that for each source file in the create_test_sourcelist an add_test command is called with the name of the test. As more tests are added to the create_test_sourcelist command, the foreach loop will automatically call add_test for each one.<br />
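<br />
The generated driver is an ordinary executable that dispatches to the individual test functions by name. Assuming the CommonCxxTests driver built above, invoking a single test might look like the following sketch:<br />
<br />
<syntaxhighlight lang="text"><br />
# run one test function by the name of its source file (no extension)<br />
./CommonCxxTests otherArrays<br />
./CommonCxxTests TestSmartPointer<br />
</syntaxhighlight><br />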
<br />
<br />
===Managing Test Data===<br />
<br />
In addition to handling large numbers of tests, CMake contains a system for managing test data. It is encapsulated in the ExternalData CMake module, which downloads large data on an as-needed basis, retains version information, and allows distributed storage.<br />
<br />
The design of ExternalData follows that of distributed version control systems, using hash-based file identifiers and object stores, but it also takes advantage of the presence of a dependency-based build system. The figure below illustrates the approach. Source trees contain lightweight "content links" referencing data in remote storage by hashes of their content. The ExternalData module produces build rules to download the data to local stores and reference them from build trees by symbolic links (copies on Windows).<br />
<br />
A content link is a small, plain text file containing a hash of the real data. Its name is the same as its data file, with an additional extension identifying the hash algorithm, e.g. img.png.md5. Content links always take the same (small) amount of space in the source tree regardless of the real data size. The CMakeLists.txt CMake configuration files refer to data using a DATA{} syntax inside calls to the ExternalData module API. For example, DATA{img.png} tells the ExternalData module to make img.png available in the build tree even if only an img.png.md5 content link appears in the source tree.<br />
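<br />
As a sketch, a content link for a (hypothetical) img.png can be produced with standard command line tools by writing the MD5 hash of the data file into the img.png.md5 file:<br />
<br />
<syntaxhighlight lang="text"><br />
# write the MD5 hash of the data file into its content link<br />
md5sum img.png | cut -d' ' -f1 > img.png.md5<br />
</syntaxhighlight><br />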
<br />
<<Figure 10.1: ExternalData module flow chart>><br />
<br />
The ExternalData module implements a flexible system to prevent duplication of content fetching and storage. Objects are retrieved from a list of (possibly redundant) local and remote locations specified in the ExternalData CMake configuration as a list of "URL templates". The only requirement of remote storage systems is the ability to fetch from a URL that locates content through specification of the hash algorithm and hash value. Local or networked file systems, an Apache FTP server, or a Midas<ref>http://www.midasplatform.org</ref> server, for example, all have this capability. Each URL template has %(algo) and %(hash) placeholders for ExternalData to replace with values from a content link.<br />
<br />
A persistent local object store can cache downloaded content to share among build trees by setting the ExternalData_OBJECT_STORES CMake build configuration variable. This is helpful to de-duplicate content for multiple build trees. It also resolves an important pragmatic concern in a regression testing context; when many machines simultaneously start a nightly dashboard build, they can use their local object store instead of overloading the data servers and flooding network traffic.<br />
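<br />
For example, a shared local store can be configured in each build tree with a single cache variable (the path here is hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# reuse one on-disk object store across all build trees on this machine<br />
set (ExternalData_OBJECT_STORES "/opt/MyProject/ExternalData")<br />
</syntaxhighlight><br />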
<br />
Retrieval is integrated with a dependency-based build system, so resources are fetched only when needed. For example, if the system is used to retrieve testing data and BUILD_TESTING is OFF, the data are not retrieved unnecessarily. When the source tree is updated and a content link changes, the build system fetches the new data as needed.<br />
<br />
Since all references leaving the source tree go through hashes, they do not depend on any external state. Remote and local object stores can be relocated without invalidating content links in older versions of the source code. Content links within a source tree can be relocated or renamed without modifying the object stores. Duplicate content links can exist in a source tree, but download will only occur once. Multiple versions of data with the same source tree file name in a project's history are uniquely identified in the object stores.<br />
<br />
Hash-based systems allow the use of untrusted connections to remote resources because downloaded content is verified after it is retrieved. Configuration of the URL templates list improves robustness by allowing multiple redundant remote storage resources. Storage resources can also change over time on an as-needed basis. If a project's remote storage moves over time, a build of older source code versions is always possible by adjusting the URL templates configured for the build tree or by manually populating a local object store.<br />
<br />
A simple application of the ExternalData module looks like the following:<br />
<br />
<syntaxhighlight lang="text"><br />
include (ExternalData)<br />
set (midas "http://midas.kitware.com/MyProject")<br />
<br />
<br />
# Add standard remote object stores to user's<br />
# configuration.<br />
list (APPEND ExternalData_URL_TEMPLATES<br />
"${midas}?algorithm=%(algo)&hash=%(hash)"<br />
"ftp://myproject.org/files/%(algo)/%(hash)"<br />
)<br />
# Add a test referencing data.<br />
ExternalData_Add_Test (MyProjectData<br />
NAME SmoothingTest<br />
COMMAND SmoothingExe DATA{Input/Image.png}<br />
SmoothedImage.png<br />
)<br />
# Add a build target to populate the real data.<br />
ExternalData_Add_Target (MyProjectData)<br />
</syntaxhighlight><br />
<br />
The ExternalData_Add_Test function is a wrapper around CMake's add_test command. The source tree is probed for an Input/Image.png.md5 content link containing the data's MD5 hash. After checking the local object store, a request is made sequentially to each URL in the ExternalData_URL_TEMPLATES list with the data's hash. Once found, a symlink is created in the build tree. The DATA{Input/Image.png} path will expand to the build tree path in the test command line. Data are retrieved when the MyProjectData target is built.<br />
<br />
<br />
===Producing Test Dashboards===<br />
<br />
As your project's testing needs grow, keeping track of the test results can become overwhelming. This is especially true for projects that are tested nightly on a number of different platforms. In these cases, we recommend using a test dashboard to summarize the test results. (see Figure 10.2)<br />
<br />
A test dashboard summarizes the results for many tests on many platforms, and its hyperlinks allow people to drill down into additional levels of detail quickly. The CTest executable includes support for producing test dashboards. When run with the correct options, CTest will produce XML-based output recording the build and test results, and post them to a dashboard server. The dashboard server runs an open source software package called CDash. CDash collects the XML results and produces HTML web pages from them.<br />
<br />
Before discussing how to use CTest to produce a dashboard, let us consider the main parts of a testing dashboard. Each night at a specified time, the dashboard server will open up a new dashboard so each day there is a new web page showing the results of tests for that twenty-four hour period. There are links on the main page that allow you to quickly navigate through different days. Looking at the main page for a project (such as CMake's dashboard off of www.cmake.org), you will see that it is divided into a few main components. Near the top you will find a set of links that allow you to step to previous dashboards, as well as links to project pages such as the bug tracker, documentation, etc.<br />
<br />
<<Figure 10.2: Sample Testing Dashboard>><br />
<br />
Below that, you will find groups of results. Typical groups include Nightly, Experimental, Continuous, Coverage, and Dynamic Analysis (see Figure 10.3). The category into which a dashboard entry will be placed depends on how it was generated. The simplest are Experimental entries, which represent dashboard results for someone's current copy of the project's source code. With an experimental dashboard, the source code is not guaranteed to be up to date. In contrast, a Nightly dashboard entry is one where CTest tries to update the source code to a specific date and time. The expectation is that all nightly dashboard entries for a given day should be based on the same source code.<br />
<br />
<<Figure 10.3: Experimental, Coverage, and Dynamic Analysis Results>><br />
<br />
A continuous dashboard entry is one that is designed to run every time new files are checked in. Depending on how frequently new files are checked in, a single day's dashboard could have many continuous entries. Continuous dashboards are particularly helpful for cross-platform projects where a problem may only show up on some platforms. In those cases, a developer can commit a change that works on their platform, and a continuous build running on another platform can catch the error, allowing the developer to correct the problem promptly.<br />
<br />
Dynamic Analysis and Coverage dashboards are designed to test the memory safety and code coverage of a project. A Dynamic Analysis dashboard entry is one where all the tests are run with a memory access/leak checking program enabled. Any resulting errors or warnings are parsed, summarized, and displayed. This is important to verify that your software is not leaking memory or reading from uninitialized memory. Coverage dashboard entries are similar in that all the tests are run, but as they run, the lines of code being executed are tracked. When all the tests have been run, a listing of how many times each line of code was executed is produced and displayed on the dashboard.<br />
<br />
<br />
====Adding CDash Dashboard Support to a Project====<br />
<br />
In this section we show how to submit results to a CDash dashboard. You can either use the Kitware CDash server at my.cdash.org, or you can set up your own CDash server as described in section 10.11. If you are using my.cdash.org, you can click on the "Start My Project" button, which will ask you to create an account (or log in if you already have one), and then bring you to a page to start creating your project. If you have installed your own CDash server, you should log in to your CDash server as administrator and select "Create New Project" from the administration panel. Regardless of which approach you use, the next few steps will be to fill in information about your project as shown in Figure 10.4. Many of the items below are optional, so do not be concerned if you do not have a value for them; just leave them empty if they don't apply.<br />
<br />
<<Figure 10.4: Creating a new project in CDash>><br />
<br />
'''Name:''' what you want to call the project.<br />
<br />
'''Description:''' description of the project to be shown on the first page.<br />
<br />
'''Home URL:''' home URL of the project to appear in the main menu of the dashboard.<br />
<br />
'''Bug Tracker URL:''' URL to the bug tracker. Currently CDash supports Mantis<ref>http://www.mantisbt.org/</ref>, and if a bug is entered in the repository with the message "BUG: 132456", CDash will automatically link to the appropriate bug.<br />
<br />
'''Documentation URL:''' URL to where the project's documentation is kept. This will appear in the main menu of the dashboard.<br />
<br />
'''Public Dashboard:''' if checked, the dashboard is public and anybody can see the results of the dashboard. If unchecked, only users assigned to this project can access the dashboard.<br />
<br />
'''Logo:''' logo of the project to be displayed on the main dashboard. Optimal size for a logo is 100x100 pixels. Transparent GIFs work best as they can blend in with the CDash background.<br />
<br />
'''Repository Viewer URL:''' URL of the web repository browser. CDash currently supports: ViewCVS, Trac, Fisheye, ViewVC, WebSVN, Loggerhead, GitHub, gitweb, hgweb, and others. Some example URLs are: * http://public.kitware.com/cgi-bin/viewcvs.cgi/?cvsroot=CMake (for ViewVC) * https://www.kitware.com/websvn/listing.php?repname=MyRepository (for WebSVN)<br />
<br />
'''Repositories:''' in order to display the daily updates, CDash gets a diff version of the modified files. Currently CDash supports only anonymous repository access. A typical URL is :pserver:anoncvs@myproject.org:/cvsroot/MyProject.<br />
<br />
'''Nightly Start Time:''' CDash displays the current dashboard using a 24 hour window. The nightly start time defines the beginning of this window. Note that the start time is expressed in the form HH:MM:SS TZ, e.g. 01:00:00 UTC. It is recommended to express the nightly start time in UTC to keep operations running smoothly across the boundaries of local time changes, like moving to or from daylight saving time.<br />
<br />
'''Coverage Threshold:''' CDash marks that coverage has passed (green) if the global coverage for a build or specific files is above this threshold. It is recommended to set the coverage threshold to a high value and decrease it as you focus on improving your coverage.<br />
<br />
'''Enable Test Timing:''' enable/disable test timing for this project. See "Test timing" in the next section for more information.<br />
<br />
'''Test Time Standard Deviation:''' set a multiplier for the standard deviation of a test time. If the time for a test is higher than the mean + multiplier * standard deviation, the test time status is marked as failed. The default value is 4 if not specified. Note that changing this value does not affect previous builds, only builds submitted after the modification.<br />
<br />
'''Test Time Standard Deviation Threshold:''' set a minimum standard deviation for a test time. If the current standard deviation for a test is lower than this threshold, then the threshold is used instead. This is particularly important for tests that have a very low standard deviation, but still some variability. The default threshold is set to 2 if not specified. Note that changing this value does not affect previous builds, only builds submitted after the modification.<br />
<br />
'''Test Time # Max Failures Before Flag:''' some tests might take longer from one day to another depending on the client machine load. This variable defines the number of times a test should fail because of timing issues before being flagged.<br />
<br />
'''Email Submission Failures:''' enable/disable sending email when a build fails (configure errors, build errors and warnings, update failures, and test failures) for this project. This is a general on/off switch for failure email.<br />
<br />
<br />
'''Email Redundant Failures:''' by default CDash does not send email for the same failures. For instance, if a build continues to fail over time, only one email would be sent. If Email Redundant Failures is checked, then CDash will send an email every time a build has a failure.<br />
<br />
'''Email Build Missing:''' enable/disable sending email when a build has not been submitted.<br />
<br />
'''Email Low Coverage:''' enable/disable sending email when the coverage for files is lower than the threshold value specified above.<br />
<br />
'''Email Test Timing Changed:''' enable/disable sending email when a test's timing has changed.<br />
<br />
'''Maximum Number of Items in Email:''' dictates how many failures should be sent in an email.<br />
<br />
'''Maximum Number of Characters in Email:''' dictates how many characters from the log should be sent in the email.<br />
<br />
'''Google Analytics Tracker:''' CDash supports visitor tracking through Google analytics. See "Adding Google Analytics" for more information.<br />
<br />
'''Show Site IP Addresses:''' enable/disable the display of IP addresses of the sites submitting to this project.<br />
<br />
'''Display Labels:''' as of CDash 1.4 and CTest 2.8, labels can be attached to various build and test results. If checked, these labels are displayed on applicable CDash pages.<br />
<br />
'''AutoRemove Timeframe:''' set the number of days to retain results for this project. If the timeframe is less than 2 days, CDash will not remove any builds.<br />
<br />
'''AutoRemove Max Builds:''' set the maximum number of builds to remove when performing the auto removal of builds.<br />
<br />
<br />
After providing this information, you can click on "Create Project" to create the project in CDash. At this point the server is ready to accept dashboard submissions. The next step is to provide the dashboard server information to your software project. This information is kept in a file named CTestConfig.cmake at the top level of your source tree. You can download this file by clicking on the "Edit Project" button for your dashboard (it looks like a pie chart with a wrench underneath it), then clicking on the miscellaneous tab and selecting "Download CTestConfig", and then saving the CTestConfig.cmake in your source tree. In the next section, we review this file in more detail.<br />
<br />
<br />
====Client Setup====<br />
<br />
To support dashboards in your project you need to include the CTest module as follows.<br />
<br />
<syntaxhighlight lang="text"><br />
# Include CDash dashboard testing module<br />
include (CTest)<br />
</syntaxhighlight><br />
<br />
The CTest module will then read settings from the CTestConfig.cmake file you downloaded from CDash. If you have added add_test() (page 277) command calls to your project, creating a dashboard entry is as simple as running:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D Experimental<br />
</syntaxhighlight><br />
<br />
The -D option tells CTest to create a dashboard entry. The next argument indicates what type of dashboard entry to create. Creating a dashboard entry involves quite a few steps that can be run independently, or as one command. In this example, the Experimental argument will cause CTest to perform a number of different steps as one command. The different steps of creating a dashboard entry are summarized below.<br />
<br />
'''Start''' Prepare a new dashboard entry. This creates a Testing subdirectory in the build directory. The Testing subdirectory will contain a subdirectory for the dashboard results with a name that corresponds to the dashboard time. The Testing subdirectory will also contain a subdirectory for the temporary testing results called Temporary.<br />
<br />
'''Update''' Perform a source control update of the source code (typically used for nightly or continuous runs). Currently CTest supports Concurrent Versions System (CVS), Subversion, Git, Mercurial, and Bazaar.<br />
<br />
'''Configure''' Run CMake on the project to make sure the Makefiles or project files are up to date.<br />
<br />
'''Build''' Build the software using the specified generator.<br />
<br />
'''Test''' Run all the tests and record the results.<br />
<br />
'''MemoryCheck''' Perform memory checks using Purify or valgrind.<br />
<br />
'''Coverage''' Collect source code coverage information using gcov or Bullseye.<br />
<br />
'''Submit''' Submit the testing results as a dashboard entry to the server.<br />
<br />
Each of these steps can be run independently for a Nightly or Experimental entry using the following syntax:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D NightlyStart<br />
ctest -D NightlyBuild<br />
ctest -D NightlyCoverage -D NightlySubmit<br />
</syntaxhighlight><br />
<br />
or<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D ExperimentalStart<br />
ctest -D ExperimentalConfigure<br />
ctest -D ExperimentalCoverage -D ExperimentalSubmit<br />
</syntaxhighlight><br />
<br />
Alternatively, you can use shortcuts that perform the most common combinations all at once. The shortcuts that CTest has defined include:<br />
<br />
'''ctest -D Experimental''' performs the start, configure, build, test, coverage, and submit commands.<br />
<br />
'''ctest -D Nightly''' performs the start, update, configure, build, test, coverage, and submit commands.<br />
<br />
'''ctest -D Continuous''' performs the start, update, configure, build, test, coverage, and submit commands.<br />
<br />
'''ctest -D MemoryCheck''' performs the start, configure, build, memorycheck, coverage, and submit commands.<br />
<br />
When first setting up a dashboard it is often useful to combine the -D option with the -V option. This will allow you to see the output of all the different stages of the dashboard process. Likewise, CTest maintains log files in the Testing/Temporary directory it creates in your binary tree. There you will find log files for the most recent dashboard run. The dashboard results (XML files) are stored in the Testing directory as well.<br />
<br />
<br />
===Customizing Dashboards for a Project===<br />
<br />
CTest has a few options that can be used to control how it processes a project. When CTest runs a dashboard, if it finds CTestCustom.ctest files in the binary tree, it will load these files and use the settings in them to control its behavior. The syntax of a CTestCustom file is the same as regular CMake syntax, although only set commands are normally used in this file. These commands specify properties that CTest will consider when performing the testing.<br />
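<br />
A minimal CTestCustom.ctest might look like the following sketch. The two variables shown are standard CTest customization settings, though the specific test name and path here are only illustrative:<br />
<br />
<syntaxhighlight lang="text"><br />
# skip tests that are known to be broken in this build tree<br />
set (CTEST_CUSTOM_TESTS_IGNORE<br />
  SystemInformationNew<br />
  )<br />
<br />
# suppress build warnings coming from third-party code<br />
set (CTEST_CUSTOM_WARNING_EXCEPTION<br />
  ".*Utilities/cmzlib.*"<br />
  )<br />
</syntaxhighlight><br />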
<br />
<br />
====Dashboard Submission Settings====<br />
<br />
A number of the basic dashboard settings are provided in the file that you download from CDash. You can edit these initial values and provide additional values if you wish. The first value that is set is the nightly start time. This is the time that dashboards all around the world will use for checking out their copy of the nightly source code. This time also controls how dashboard submissions will be grouped together. All submissions from the nightly start time until the next nightly start time will be included on the same "day".<br />
<br />
<syntaxhighlight lang="text"><br />
# Dashboard is opened for submissions for a 24 hour period<br />
# starting at the specified NIGHTLY_START_TIME. Time is<br />
# specified in 24 hour format.<br />
set (CTEST_NIGHTLY_START_TIME "01:00:00 UTC")<br />
</syntaxhighlight><br />
<br />
The next group of settings control where to submit the testing results. This is the location of the CDash server.<br />
<br />
<syntaxhighlight lang="text"><br />
# CDash server to submit results (used by client)<br />
set (CTEST_DROP_METHOD http)<br />
set (CTEST_DROP_SITE "my.cdash.org")<br />
set (CTEST_DROP_LOCATION "/submit.php?project=KensTest")<br />
set (CTEST_DROP_SITE_CDASH TRUE)<br />
</syntaxhighlight><br />
<br />
The CTEST_DROP_SITE (page 678) specifies the location of the CDash server. Build and test results generated by CDash clients are sent to this location. The CTEST_DROP_LOCATION (page 678) is the directory or the HTTP URL on the server where CDash clients leave their build and test reports. The CTEST_DROP_SITE_CDASH (page 678) specifies that the current server is CDash, which prevents CTest from trying to "trigger" the submission (this is still done if this variable is not set to allow for backwards compatibility with Dart and Dart 2).<br />
<br />
Currently CDash supports only the HTTP drop submission method; however, CTest supports other submission types. The CTEST_DROP_METHOD (page 678) specifies the method used to submit testing results. The most common setting for this will be HTTP, which uses the Hyper Text Transfer Protocol (HTTP) to transfer the test data to the server. Other drop methods are supported for special cases such as FTP and SCP. In the example above, clients that are submitting their results using the HTTP protocol use a web address as their drop site. If the submission is via FTP, this location is relative to where the CTEST_DROP_SITE_USER (page 678) will log in by default. The CTEST_DROP_SITE_USER specifies the FTP username the client will use on the server. For FTP submissions this user will typically be "anonymous". However, any username that can communicate with the server can be used. For FTP servers that require a password, it can be stored in the CTEST_DROP_SITE_PASSWORD (page 678) variable. The CTEST_DROP_SITE_MODE (not used in this example) is an optional variable that you can use to specify the FTP mode. Most FTP servers will handle the default passive mode, but you can set the mode explicitly to active if your server does not.<br />
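<br />
As a sketch, an FTP-based submission block might look like the following; the server name, drop directory, and password here are purely hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_DROP_METHOD ftp)<br />
set (CTEST_DROP_SITE "ftp.example.org")<br />
set (CTEST_DROP_LOCATION "/incoming")<br />
set (CTEST_DROP_SITE_USER "anonymous")<br />
set (CTEST_DROP_SITE_PASSWORD "user@example.org")<br />
set (CTEST_DROP_SITE_MODE "active")<br />
</syntaxhighlight><br />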
<br />
CTest can also be run from behind a firewall. If the firewall allows FTP or HTTP traffic, then no additional settings are required. If the firewall requires an FTP/HTTP proxy, or uses a SOCKS4 or SOCKS5 type proxy, some environment variables need to be set. HTTP_PROXY and FTP_PROXY specify the servers that service HTTP and FTP proxy requests. HTTP_PROXY_PORT and FTP_PROXY_PORT specify the port on which the HTTP and FTP proxies reside. HTTP_PROXY_TYPE specifies the type of the HTTP proxy used. The three different types of proxies supported are the default, which is a generic HTTP/FTP proxy, "SOCKS4", and "SOCKS5", which specify SOCKS4 and SOCKS5 compatible proxies.<br />
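<br />
In a CTest script, these environment variables can be set with the set(ENV{...}) syntax before the dashboard runs; the proxy host and port below are hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
set (ENV{HTTP_PROXY} "proxy.example.com")<br />
set (ENV{HTTP_PROXY_PORT} "8080")<br />
set (ENV{HTTP_PROXY_TYPE} "SOCKS5")<br />
</syntaxhighlight><br />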
<br />
<br />
====Filtering Errors and Warnings====<br />
<br />
By default, CTest has a list of regular expressions that it matches for finding the errors and warnings from the output of the build process. You can override these settings in your CTestCustom.ctest files using several variables as shown below.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CUSTOM_WARNING_MATCH<br />
${CTEST_CUSTOM_WARNING_MATCH}<br />
"{standard input}:[0-9][0-9]*: Warning: "<br />
)<br />
<br />
set (CTEST_CUSTOM_WARNING_EXCEPTION<br />
${CTEST_CUSTOM_WARNING_EXCEPTION}<br />
"tk8.4.5/[^/]+/[^/]+.c[:\"]"<br />
"xtree.[0-9]+. : warning C4702: unreachable code"<br />
"warning LNK4221"<br />
"variable .var_args[2]*. is used before its value is set"<br />
"jobserver unavailable"<br />
)<br />
</syntaxhighlight><br />
<br />
Another useful feature of the CTestCustom files is that you can use them to limit the tests that are run for memory checking dashboards. Memory checking using Purify or Valgrind is a CPU-intensive process that can take twenty hours for a dashboard that normally takes one hour. To help alleviate this problem, CTest allows you to exclude some of the tests from the memory checking process as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CUSTOM_MEMCHECK_IGNORE<br />
${CTEST_CUSTOM_MEMCHECK_IGNORE}<br />
TestSetGet<br />
otherPrint-ParaView<br />
Example-vtkLocal<br />
Example-vtkMy<br />
)<br />
</syntaxhighlight><br />
<br />
The format for excluding tests is simply a list of test names as specified when the tests were added in your CMakeLists file with add_test() (page 277).<br />
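<br />
For example, the TestSetGet entry in the list above would correspond to a test added in the project's CMakeLists file roughly like this (the executable path is hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (TestSetGet ${EXECUTABLE_OUTPUT_PATH}/TestSetGet)<br />
</syntaxhighlight><br />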
<br />
In addition to the demonstrated settings, such as CTEST_CUSTOM_WARNING_MATCH, CTEST_CUSTOM_WARNING_EXCEPTION, and CTEST_CUSTOM_MEMCHECK_IGNORE, CTest also checks several other variables.<br />
<br />
'''CTEST_CUSTOM_ERROR_MATCH''' Additional regular expressions to consider a build line as an error line<br />
<br />
'''CTEST_CUSTOM_ERROR_EXCEPTION''' Additional regular expressions to consider a build line not as an error line<br />
<br />
'''CTEST_CUSTOM_WARNING_MATCH''' Additional regular expressions to consider a build line as a warning line<br />
<br />
'''CTEST_CUSTOM_WARNING_EXCEPTION''' Additional regular expressions to consider a build line not as a warning line<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_NUMBER_OF_ERRORS''' Maximum number of errors before CTest stops reporting errors (default 50)<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_NUMBER_OF_WARNINGS''' Maximum number of warnings before CTest stops reporting warnings (default 50)<br />
<br />
'''CTEST_CUSTOM_COVERAGE_EXCLUDE''' Regular expressions for files to be excluded from the coverage analysis<br />
<br />
'''CTEST_CUSTOM_PRE_MEMCHECK''' List of commands to execute before performing memory checking<br />
<br />
'''CTEST_CUSTOM_POST_MEMCHECK''' List of commands to execute after performing memory checking<br />
<br />
'''CTEST_CUSTOM_MEMCHECK_IGNORE''' List of tests to exclude from the memory checking step<br />
<br />
'''CTEST_CUSTOM_PRE_TEST''' List of commands to execute before performing testing<br />
<br />
'''CTEST_CUSTOM_POST_TEST''' List of commands to execute after performing testing<br />
<br />
'''CTEST_CUSTOM_TESTS_IGNORE''' List of tests to exclude from the testing step<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_PASSED_TEST_OUTPUT_SIZE''' Maximum size of test output for the passed test (default 1k)<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_FAILED_TEST_OUTPUT_SIZE''' Maximum size of test output for the failed test (default 300k)<br />
<br />
Commands specified in CTEST_CUSTOM_PRE_TEST and CTEST_CUSTOM_POST_TEST, as well as the equivalent memory checking ones, are executed once per CTest run. These commands can be used, for example, if all tests require some initial setup and some final cleanup to be performed.<br />
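<br />
As a sketch, a project whose tests share a test database might wrap the test step with hypothetical setup and cleanup scripts:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CUSTOM_PRE_TEST "perl setupTestDB.pl")<br />
set (CTEST_CUSTOM_POST_TEST "perl cleanupTestDB.pl")<br />
</syntaxhighlight><br />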
<br />
<br />
====Adding Notes to a Dashboard====<br />
<br />
CTest and CDash support adding note files to a dashboard submission. These will appear on the dashboard as a clickable icon that links to the text of all the files. To add notes, call CTest with the -A option followed by a semicolon-separated list of filenames. The contents of these files will be submitted as notes for the dashboard. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D Continuous -A "C:/MyNotes.txt;C:/OtherNotes.txt"<br />
</syntaxhighlight><br />
<br />
Another way to submit notes with a dashboard is to copy or write the notes as files into a Notes directory under the Testing directory of your binary tree. Any files found there when CTest submits a dashboard will also be uploaded as notes.<br />
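<br />
A minimal sketch of this second approach, run from the top of the binary tree (the file name and note text are hypothetical):<br />

```shell
# place a note file where CTest picks it up at submit time;
# any file found in Testing/Notes is uploaded with the dashboard
mkdir -p Testing/Notes
echo "nightly machine was rebooted at 02:00" > Testing/Notes/MyNotes.txt
```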
<br />
<br />
===Setting up Automated Dashboard Clients===<br />
<br />
'''IMPORTANT:''' This section is obsolete and left in only for reference. To set up new dashboards, please skip ahead to the next section and write an "advanced ctest script" instead of following the directions in this section.<br />
<br />
CTest has a built-in scripting mode to help make the process of setting up dashboard clients even easier. CTest scripts will handle most of the common tasks and options that CTest -D Nightly does not. The dashboard script is written using CMake syntax and mainly involves setting up different variables or options, or creating an elaborate procedure, depending on the complexity of testing. Once you have written the script you can run the nightly dashboard as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -S myScript.cmake<br />
</syntaxhighlight><br />
<br />
First we will consider the most basic script you can use, and then we will cover the different options you can make use of. There are four variables that you must always set in your scripts. The first two variables are the names of the source and binary directories on disk, CTEST_SOURCE_DIRECTORY (page 680) and CTEST_BINARY_DIRECTORY (page 675). These should be fully specified paths. The next variable, CTEST_COMMAND, specifies which CTest command to use for running the dashboard. This may seem a bit confusing at first. The -S option of CTest is provided to do all the setup and customization for a dashboard, but the actual running of the dashboard is done with another invocation of CTest -D. Basically, once the CTest script has done what it needs to do to set up the dashboard, it invokes CTest -D to actually generate the results. You can adjust the value of CTEST_COMMAND to control what type of dashboard to generate (Nightly, Experimental, Continuous), as well as to pass other options to the internal CTest process, such as -I,,7 to run every 7th test. To refer to the CTest that is running the script, use the variable CTEST_EXECUTABLE_NAME. The last required variable is CTEST_CMAKE_COMMAND, which specifies the full path to the cmake executable that will be used to configure the dashboard. To refer to the CMake command that corresponds to the CTest command running the script, use the variable CMAKE_EXECUTABLE_NAME. The CTest script does an initial configuration with cmake in order to generate the CTestConfig.cmake file that CTest will use for the dashboard. The following example demonstrates the use of these four variables and is an example of the simplest script you can have.<br />
<br />
<syntaxhighlight lang="text"><br />
# these are the source and binary directories on disk<br />
set (CTEST_SOURCE_DIRECTORY C:/martink/test/CMake)<br />
set (CTEST_BINARY_DIRECTORY C:/martink/test/CMakeBin)<br />
<br />
# which CTest command to use for running the dashboard<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\" -D Nightly"<br />
)<br />
<br />
# what CMake command to use for configuring this dashboard<br />
set (CTEST_CMAKE_COMMAND<br />
"\"${CMAKE_EXECUTABLE_NAME}\""<br />
)<br />
</syntaxhighlight><br />
<br />
The script above is not that different from running CTest -D from the command line yourself. All it adds is that it verifies that the binary directory exists and creates it if it does not. Where CTest scripting really shines is in the optional features it supports. We will consider these options one by one, starting with one of the most commonly used, CTEST_START_WITH_EMPTY_BINARY_DIRECTORY. When this variable is set to true, CTest will delete the binary directory and then recreate it as an empty directory prior to running the dashboard. This guarantees that you are testing a clean build every time the dashboard is run. To use this option you simply set it in your script. In the example above we would simply add the following lines:<br />
<br />
<syntaxhighlight lang="text"><br />
# should CTest wipe the binary tree before running<br />
set (CTEST_START_WITH_EMPTY_BINARY_DIRECTORY TRUE)<br />
</syntaxhighlight><br />
<br />
Another commonly used option is the CTEST_INITIAL_CACHE variable. Whatever values you set this to will be written into the CMakeCache file prior to running the dashboard. This is an effective and simple way to initialize a cache with some preset values. The syntax is the same as what is in the cache with the exception that you must escape any quotes. Consider the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
# this is the initial cache to use for the binary tree, be<br />
# careful to escape any quotes inside of this string<br />
set (CTEST_INITIAL_CACHE "<br />
<br />
//Command used to build entire project from the command line.<br />
MAKECOMMAND:STRING=\"devenv.com\" CMake.sln /build Debug /project ALL_BUILD<br />
<br />
//make program<br />
CMAKE_MAKE_PROGRAM:FILEPATH=C:/PROGRA~1/MICROS~1.NET/Common7/IDE/devenv.com<br />
<br />
//Name of generator.<br />
CMAKE_GENERATOR:INTERNAL=Visual Studio 7 .NET 2003<br />
<br />
//Path to a program.<br />
CVSCOMMAND:FILEPATH=C:/cygwin/bin/cvs.exe<br />
<br />
//Name of the build<br />
BUILDNAME:STRING=Win32-vs71<br />
<br />
//Name of the computer/site where compile is being run<br />
SITE:STRING=DASH1.kitware<br />
<br />
")<br />
</syntaxhighlight><br />
<br />
Note that the above code is basically just one set() (page 330) command setting the value of CTEST_INITIAL_CACHE to a multiline string value. For Windows builds, these are the most common cache entries that need to be set prior to running the dashboard. The first three values control what compiler will be used to build this dashboard (Visual Studio 7.1 in this example). CVSCOMMAND might be found automatically, but if not it can be set here. The last two cache entries are the names that will be used to identify this dashboard submission on the dashboard.<br />
<br />
The next two variables work together to support additional directories and projects. For example, imagine that you had a separate data directory that you needed to keep up-to-date with your source directory. Setting the variables CTEST_CVS_COMMAND (page 677) and CTEST_EXTRA_UPDATES_1 tells CTest to perform a cvs update on the specified directory with the specified arguments prior to running the dashboard. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
# what cvs command to use for updating this dashboard<br />
set (CTEST_CVS_COMMAND "C:/cygwin/bin/cvs.exe")<br />
<br />
# set any extra directories to do an update on<br />
set (CTEST_EXTRA_UPDATES_1<br />
"C:/Dashboards/My Tests/VTKData" "-dAP")<br />
</syntaxhighlight><br />
<br />
If you have more than one directory that needs to be updated you can use CTEST_EXTRA_UPDATES_2 through CTEST_EXTRA_UPDATES_9 in the same manner. The next variable you can set is called CTEST_ENVIRONMENT. This variable consolidates several set commands into a single command. Setting this variable allows you to set environment variables that will be used by the process running the dashboards. You can set as many environment variables as you want using the syntax shown below.<br />
<br />
<syntaxhighlight lang="text"><br />
# set any extra environment variables here<br />
set (CTEST_ENVIRONMENT<br />
"DISPLAY=:0"<br />
"USE_GCC_MALLOC=1"<br />
)<br />
# is the same as<br />
set (ENV{DISPLAY} ":0")<br />
set (ENV{USE_GCC_MALLOC} "1")<br />
</syntaxhighlight><br />
<br />
The final general purpose option we will discuss is CTest's support for restoring a bad dashboard. In some cases, you might want to make sure that you always have a working build of the software. In other instances, you might use the resulting executables or libraries from one dashboard in the build process of another dashboard. If the first dashboard fails in either of these situations, it is best to drop back to the last working dashboard. You can do this in CTest by setting CTEST_BACKUP_AND_RESTORE to true. When this is set to true, CTest will first back up the source and binary directories. It will then check out a new source directory and create a new binary directory. After that, it will run a full dashboard. If the dashboard is successful, the backup directories are removed; if for some reason the new dashboard fails, the new directories will be removed and the old directories restored. To make this work, you must also set the CTEST_CVS_CHECKOUT (page 677) variable. This should be set to the command required to check out your source tree. This doesn't actually have to be cvs, but it must result in a source tree in the correct location. Consider the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
# do a backup and should the build fail restore,<br />
# if this is true you must set the CTEST_CVS_CHECKOUT<br />
# variable below.<br />
set (CTEST_BACKUP_AND_RESTORE TRUE)<br />
<br />
# this is the full cvs command to checkout the source dir<br />
# this will be run from the directory above the source dir<br />
set (CTEST_CVS_CHECKOUT<br />
"/usr/bin/cvs -d /cvsroot/FOO co -d FOO FOO"<br />
)<br />
</syntaxhighlight><br />
<br />
Note that whatever checkout command you specify will be run from the directory above the source directory. A typical nightly dashboard client script will look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_SOURCE_NAME CMake)<br />
set (CTEST_BINARY_NAME CMake-gcc)<br />
set (CTEST_DASHBOARD_ROOT "$ENV{HOME}/Dashboards/My Tests")<br />
<br />
set (CTEST_SOURCE_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_SOURCE_NAME}")<br />
set (CTEST_BINARY_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_BINARY_NAME}")<br />
<br />
# which ctest command to use for running the dashboard<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\"<br />
-D Nightly<br />
-A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\"")<br />
<br />
# what CMake command to use for configuring this dashboard<br />
set (CTEST_CMAKE_COMMAND "\"${CMAKE_EXECUTABLE_NAME}\"")<br />
<br />
# should ctest wipe the binary tree before running<br />
set (CTEST_START_WITH_EMPTY_BINARY_DIRECTORY TRUE)<br />
# this is the initial cache to use for the binary tree<br />
set (CTEST_INITIAL_CACHE "<br />
SITE:STRING=midworld.kitware<br />
BUILDNAME:STRING=DarwinG5-g++<br />
MAKECOMMAND:STRING=make -i -j2<br />
")<br />
<br />
# set any extra environment variables here<br />
set (CTEST_ENVIRONMENT<br />
"CC=gcc"<br />
"CXX=g++"<br />
)<br />
</syntaxhighlight><br />
<br />
<br />
====Settings for Continuous Dashboards====<br />
<br />
The next three variables are used for setting up continuous dashboards. As mentioned earlier, a continuous dashboard is designed to run continuously throughout the day, providing quick feedback on the state of the software. If you are doing a continuous dashboard you can use CTEST_CONTINUOUS_DURATION and CTEST_CONTINUOUS_MINIMUM_INTERVAL to run the continuous repeatedly. The duration controls how long the script should run continuous dashboards, and the minimum interval specifies the shortest allowed time between continuous dashboards. For example, say that you want to run a continuous dashboard from 9AM until 7PM and that you want no more than one dashboard every twenty minutes. To do this you would set the duration to 600 minutes (ten hours) and the minimum interval to 20 minutes. If you run the test script at 9AM it will start a continuous dashboard. When that dashboard finishes it will check to see how much time has elapsed. If less than 20 minutes has elapsed, CTest will sleep until the 20 minutes are up. If 20 or more minutes have elapsed, it will immediately start another continuous dashboard. Do not be concerned that you will end up with 30 dashboards a day (10 hours × three times an hour). If there have been no changes to the source code, CTest will not build and submit a dashboard. It will instead wait until the next interval is up and then check again. Using this feature just involves setting the following variables to the values you desire.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CONTINUOUS_DURATION 600)<br />
set (CTEST_CONTINUOUS_MINIMUM_INTERVAL 20)<br />
</syntaxhighlight><br />
<br />
Earlier, we introduced the CTEST_START_WITH_EMPTY_BINARY_DIRECTORY variable that can be set to start the dashboards with an empty binary directory. If this is set to true for a continuous dashboard, then every continuous build where there has been a change in the source code will result in a complete build from scratch. For larger projects this can significantly limit the number of continuous dashboards that can be generated in a day, while not using it can result in build errors or omissions because it is not a clean build. Fortunately there is a compromise: if you set CTEST_START_WITH_EMPTY_BINARY_DIRECTORY_ONCE to true, CTest will start with a clean binary directory for the first continuous build but not subsequent ones. Based on your settings for the duration, this is an easy way to start with a clean build every morning, but use existing builds for the rest of the day.<br />
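<br />
Using this compromise is again a single setting in the script:<br />
<br />
<syntaxhighlight lang="text"><br />
# wipe the binary tree only for the first continuous build of the day<br />
set (CTEST_START_WITH_EMPTY_BINARY_DIRECTORY_ONCE TRUE)<br />
</syntaxhighlight><br />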
<br />
Another helpful feature to use with a continuous dashboard is the -I option. A large project may have so many tests that running all the tests limits how frequently a continuous dashboard can be generated. By adding -I,,7 (or -I,,5 etc) to the CTEST_COMMAND value, the continuous dashboard will only run every seventh test, significantly reducing the time required between continuous dashboards. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
# these are the names of the source and binary directories<br />
set (CTEST_SOURCE_NAME CMake-cont)<br />
set (CTEST_BINARY_NAME CMakeBCC-cont)<br />
set (CTEST_DASHBOARD_ROOT "c:/Dashboards/My Tests")<br />
set (CTEST_SOURCE_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_SOURCE_NAME}")<br />
set (CTEST_BINARY_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_BINARY_NAME}")<br />
<br />
# which ctest command to use for running the dashboard<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\"<br />
-D Continuous<br />
-A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\"")<br />
<br />
# what CMake command to use for configuring this dashboard<br />
set (CTEST_CMAKE_COMMAND "\"${CMAKE_EXECUTABLE_NAME}\"")<br />
<br />
# this is the initial cache to use for the binary tree<br />
set (CTEST_INITIAL_CACHE "<br />
SITE:STRING=dash14.kitware<br />
BUILDNAME:STRING=Win32-bcc5.6<br />
CMAKE_GENERATOR:INTERNAL=Borland Makefiles<br />
CVSCOMMAND:FILEPATH=C:/Program Files/TortoiseCVS/cvs.exe<br />
CMAKE_CXX_FLAGS:STRING=-w- -whid -waus -wpar -tWM<br />
CMAKE_C_FLAGS:STRING=-w- -whid -waus -tWM<br />
")<br />
<br />
# set any extra environment variables here<br />
set (ENV{PATH} "C:/Program Files/Borland/CBuilder6/Bin\;<br />
C:/Program Files/Borland/CBuilder6/Projects/Bpl"<br />
)<br />
</syntaxhighlight><br />
<br />
<br />
====Variables Available in CTest Scripts====<br />
<br />
There are a few variables that will be set before your script executes. The first two variables are the directory the script is in, CTEST_SCRIPT_DIRECTORY, and the name of the script itself, CTEST_SCRIPT_NAME. These two variables can be used to make your scripts more portable. For example, if you wanted to include the script itself as a note for the dashboard you could do the following:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\" -D Continuous<br />
-A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\""<br />
)<br />
</syntaxhighlight><br />
<br />
Another variable you can use is CTEST_SCRIPT_ARG. This variable can be set by providing a comma-separated argument after the script name when invoking CTest -S. For example, ctest -S foo.cmake,21 would result in CTEST_SCRIPT_ARG being set to 21.<br />
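<br />
As a sketch of one way the argument might be consumed, here it is used as the stride for the -I option (this particular use is purely illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
# invoked as: ctest -S myScript.cmake,7<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\" -D Nightly -I ,,${CTEST_SCRIPT_ARG}")<br />
</syntaxhighlight><br />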
<br />
<br />
====Limitations of Traditional CTest Scripting====<br />
<br />
The traditional CTest scripting described in this section has some limitations. The first is that the dashboard will always fail if the configure step fails, because the input files for CTest are actually generated by the configure step. To make things worse, the update step will not happen and the dashboard will be stuck. To prevent this, an additional update step is necessary. This can be achieved by adding a CTEST_EXTRA_UPDATES_1 variable with a "-D yesterday" or similar flag. This will update the repository prior to doing a dashboard. Since it will update to yesterday's time stamp, the actual update step of CTest will find the files that were modified since the previous day.<br />
<br />
The second limitation of traditional CTest scripting is that it is not actually scripting. We only have control over what happens before the actual CTest run, not during or after it. For example, if we want to run the testing and then move the binaries somewhere, or build the project, perform some extra tasks, and then run tests, we need to resort to complicated workarounds, such as running CMake with the -P option as part of CTEST_COMMAND.<br />
<br />
<br />
===Advanced CTest Scripting===<br />
<br />
The CTest scripting described in the previous section is still valid and will still work. This section describes how to write command-based CTest scripts that allow the maintainer to have much more fine-grained control over the individual steps of a dashboard.<br />
<br />
<br />
====Extended CTest Scripting====<br />
<br />
To overcome the limitations of traditional CTest scripting, CTest provides an extended scripting mode. In this mode, the dashboard maintainer has access to individual CTest command functions, such as ctest_configure and ctest_build. By running these functions individually, the user can flexibly develop custom testing schemes. Here is an example of an extended CTest script:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.2)<br />
<br />
set (CTEST_SITE "andoria.kitware")<br />
set (CTEST_BUILD_NAME "Linux-g++")<br />
set (CTEST_NOTES_FILES<br />
"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")<br />
<br />
set (CTEST_DASHBOARD_ROOT "$ENV{HOME}/Dashboards/My Tests")<br />
set (CTEST_SOURCE_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake")<br />
set (CTEST_BINARY_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake-gcc")<br />
<br />
set (CTEST_UPDATE_COMMAND "/usr/bin/cvs")<br />
set (CTEST_CONFIGURE_COMMAND<br />
"\"${CTEST_SOURCE_DIRECTORY}/bootstrap\"")<br />
set (CTEST_BUILD_COMMAND "/usr/bin/make -j 2")<br />
<br />
ctest_empty_binary_directory (${CTEST_BINARY_DIRECTORY})<br />
<br />
ctest_start (Nightly)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
</syntaxhighlight><br />
<br />
The first line is there to make sure an appropriate version of CTest is used; the advanced scripting was introduced in CTest 2.2. The CMake parser is used, and so all scriptable commands from CMake are available. This includes the cmake_minimum_required command:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.2)<br />
</syntaxhighlight><br />
<br />
Overall, the layout of the rest of this script is similar to a traditional one. There are several settings that CTest will use to perform its tasks. Then, unlike with traditional CTest, there are the actual tasks that CTest will perform. Instead of providing information in the project's CMake cache, in this scripting mode all the information is provided to CTest. For compatibility reasons we may choose to write the information to the cache, but that is up to the dashboard maintainer. The first block contains the variables about the submission.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_SITE "andoria.kitware")<br />
set (CTEST_BUILD_NAME "Linux-g++")<br />
set (CTEST_NOTES_FILES<br />
"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")<br />
</syntaxhighlight><br />
<br />
These variables serve the same role as the SITE and BUILD_NAME cache variables. They are used to identify the system once it submits the results to the dashboard. CTEST_NOTES_FILES is a list of files that should be submitted as the notes of the dashboard submission. This variable corresponds to the -A flag of CTest.<br />
<br />
The second block describes the information that CTest functions will use to perform the tasks:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_DASHBOARD_ROOT "$ENV{HOME}/Dashboards/My Tests")<br />
set (CTEST_SOURCE_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake")<br />
set (CTEST_BINARY_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake-gcc")<br />
set (CTEST_UPDATE_COMMAND "/usr/bin/cvs")<br />
set (CTEST_CONFIGURE_COMMAND<br />
"\"${CTEST_SOURCE_DIRECTORY}/bootstrap\"")<br />
set (CTEST_BUILD_COMMAND "/usr/bin/make -j 2")<br />
</syntaxhighlight><br />
<br />
The CTEST_SOURCE_DIRECTORY and CTEST_BINARY_DIRECTORY serve the same purpose as in the traditional CTest script. The only difference is that we will be able to override these variables later on when calling the CTest functions, if necessary. The CTEST_UPDATE_COMMAND is the path to the command used to update the source directory from the repository. Currently CTest supports Concurrent Versions System (CVS), Subversion, Git, Mercurial, and Bazaar.<br />
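<br />
For a project hosted in Git, for example, the same variable simply points at the git client instead; the path below is hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_UPDATE_COMMAND "/usr/bin/git")<br />
</syntaxhighlight><br />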
<br />
Both the configure and build handlers support two modes. One mode is to provide the full command that will be invoked during that stage. This is designed to support projects that do not use CMake as their configuration or build tool. In this case, you specify the full command lines to configure and build your project by setting the CTEST_CONFIGURE_COMMAND and CTEST_BUILD_COMMAND variables respectively. This is similar to specifying CTEST_CMAKE_COMMAND in the traditional CTest scripting.<br />
<br />
For projects that use CMake for their configuration and build steps you do not need to specify the command lines for configuring and building your project. Instead, you will specify the CMake generator to use by setting the CTEST_CMAKE_GENERATOR variable. This way CMake will be run with the appropriate generator. One example of this is:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CMAKE_GENERATOR "Visual Studio 8 2005")<br />
</syntaxhighlight><br />
<br />
For the build step you should also set the variables CTEST_PROJECT_NAME and CTEST_BUILD_CONFIGURATION to specify how to build the project. In this case CTEST_PROJECT_NAME will match the top level CMakeLists file's PROJECT command, and therefore also match the name of the generated Visual Studio *.sln file. The CTEST_BUILD_CONFIGURATION should be one of Release, Debug, MinSizeRel, or RelWithDebInfo. Additionally, CTEST_BUILD_FLAGS can be provided as a hint to the build command. An example of testing for a CMake-based project would be:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CMAKE_GENERATOR "Visual Studio 8 2005")<br />
set (CTEST_PROJECT_NAME "Grommit")<br />
set (CTEST_BUILD_CONFIGURATION "Debug")<br />
</syntaxhighlight><br />
<br />
The final block performs the actual testing and submission:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_empty_binary_directory (${CTEST_BINARY_DIRECTORY})<br />
<br />
ctest_start (Nightly)<br />
<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}" RETURN_VALUE res)<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
ctest_submit (RETURN_VALUE res)<br />
</syntaxhighlight><br />
<br />
The ctest_empty_binary_directory command empties the directory and all subdirectories. Please note that this command has a safety measure built in, which is that it will only remove the directory if there is a CMakeCache.txt file in the top level directory. This was intended to prevent CTest from mistakenly removing a non-build directory.<br />
<br />
The rest of the block contains the calls to the actual CTest functions. Each of them corresponds to a CTest -D option. For example, instead of:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D ExperimentalBuild<br />
</syntaxhighlight><br />
<br />
the script would contain:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
</syntaxhighlight><br />
<br />
Each step yields a return value, which indicates if the step was successful. For example, the return value of the Update stage can be used in a continuous dashboard to determine if the rest of the dashboard should be run.<br />
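For instance, a continuous script might skip the remaining steps when nothing changed. The sketch below assumes the variables defined earlier in this section; ctest_update stores the number of updated files (or a negative value on error) in the RETURN_VALUE variable:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Continuous)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}" RETURN_VALUE res)<br />
# only configure, build, test and submit when files were updated<br />
if (res GREATER 0)<br />
  ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
  ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
  ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
  ctest_submit ()<br />
endif ()<br />
</syntaxhighlight><br />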
<br />
To demonstrate some advantages of using extended CTest scripting, let us examine a more advanced CTest script. This script drives testing of an application called Slicer. Slicer uses CMake internally, but it drives the build process through a series of Tcl scripts. One of the problems of this approach is that it does not support out-of-source builds. Also, on Windows certain modules come pre-built, so they have to be copied to the build directory. To test a project like that, we would use a script like this:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.2)<br />
<br />
# set the dashboard specific variables -- name and notes<br />
set (CTEST_SITE "dash11.kitware")<br />
set (CTEST_BUILD_NAME "Win32-VS71")<br />
set (CTEST_NOTES_FILES<br />
"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")<br />
<br />
# do not let any single test run for more than 1500 seconds<br />
set (CTEST_TIMEOUT "1500")<br />
<br />
# set the source and binary directories<br />
set (CTEST_SOURCE_DIRECTORY "C:/Dashboards/MyTests/slicer2")<br />
set (CTEST_BINARY_DIRECTORY "${CTEST_SOURCE_DIRECTORY}-build")<br />
<br />
set (SLICER_SUPPORT<br />
"//Dash11/Shared/Support/SlicerSupport/Lib")<br />
set (TCLSH "${SLICER_SUPPORT}/win32/bin/tclsh84.exe")<br />
<br />
# set the complete update, configure and build commands<br />
set (CTEST_UPDATE_COMMAND<br />
"C:/Program Files/TortoiseCVS/cvs.exe")<br />
set (CTEST_CONFIGURE_COMMAND<br />
  "\"${TCLSH}\" \"${CTEST_BINARY_DIRECTORY}/Scripts/genlib.tcl\"")<br />
set (CTEST_BUILD_COMMAND<br />
  "\"${TCLSH}\" \"${CTEST_BINARY_DIRECTORY}/Scripts/cmaker.tcl\"")<br />
<br />
# clear out the binary tree<br />
file (WRITE "${CTEST_BINARY_DIRECTORY}/CMakeCache.txt"<br />
"// Dummy cache just so that ctest will wipe binary dir")<br />
ctest_empty_binary_directory (${CTEST_BINARY_DIRECTORY})<br />
<br />
# special variables for the Slicer build process<br />
set (ENV{MSVC6} "0")<br />
set (ENV{GENERATOR} "Visual Studio 7 .NET 2003")<br />
set (ENV{MAKE} "devenv.exe")<br />
set (ENV{COMPILER_PATH}<br />
  "C:/Program Files/Microsoft Visual Studio .NET 2003/Common7/Vc7/bin")<br />
set (ENV{CVS} "${CTEST_UPDATE_COMMAND}")<br />
<br />
# start and update the dashboard<br />
ctest_start (Nightly)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
# define a macro to copy a directory<br />
macro (COPY_DIR srcdir destdir)<br />
  exec_program ("${CMAKE_EXECUTABLE_NAME}" ARGS<br />
    "-E copy_directory \"${srcdir}\" \"${destdir}\"")<br />
endmacro ()<br />
<br />
# Slicer does not support out of source builds so we<br />
# first copy the source directory to the binary directory<br />
# and then build it<br />
copy_dir ("${CTEST_SOURCE_DIRECTORY}"<br />
"${CTEST_BINARY_DIRECTORY}")<br />
<br />
# copy support libraries that slicer needs into the binary tree<br />
copy_dir ("${SLICER_SUPPORT}"<br />
"${CTEST_BINARY_DIRECTORY}/Lib")<br />
<br />
# finally do the configure, build, test and submit steps<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
</syntaxhighlight><br />
<br />
With extended CTest scripting we have full control over the flow, so we can perform arbitrary commands at any point. For example, after performing an update of the project, the script copies the source tree into the build directory. This allows it to approximate an "out-of-source" build.<br />
<br />
<br />
===Setting up a Dashboard Server===<br />
<br />
For many projects, using Kitware's my.cdash.org dashboard hosting will be sufficient; if that is the case for you, you can skip this section. If you wish to set up your own server, this section walks you through the process. There are a few options for what to run on the server to process the dashboard results. The preferred option is CDash, a dashboard server based on PHP, MySQL, CSS, and XSLT. Predecessors to CDash such as DART 1 and DART 2 can also be used; information on the DART systems can be found at http://www.itk.org/Dart/HTML/Index.shtml.<br />
<br />
<br />
====CDash Server====<br />
<br />
CDash is a dashboard server developed by Kitware that is based on the common "LAMP stack": PHP, CSS, XSL, MySQL/PostgreSQL, and of course a web server (normally Apache). CDash receives dashboard submissions as XML and stores them in an SQL database (currently MySQL and PostgreSQL are supported). When the web server receives a page request, the PHP scripts extract the relevant data from the database and produce XML that is passed to XSL templates, which in turn convert it into HTML; CSS provides the overall look and feel for the pages. CDash can handle large projects: it has hosted up to 30 projects on a reasonable web server, with just over 200 million records and about 89 gigabytes in the database, stored on a separate database server machine.<br />
<br />
<br />
=====Server requirements=====<br />
<br />
* MySQL (5.x and higher) or PostgreSQL (8.3 and higher)<br />
* PHP (5.0 recommended)<br />
* XSL module for PHP (apt-get install php5-xsl)<br />
* cURL module for PHP<br />
* GD module for PHP<br />
<br />
=====Getting CDash=====<br />
<br />
You can get CDash from the www.cdash.org website, or you can get the latest code from SVN using the following command:<br />
<br />
<syntaxhighlight lang="text"><br />
svn co https://www.kitware.com/svn/CDash/trunk CDash<br />
</syntaxhighlight><br />
<br />
=====Quick installation=====<br />
<br />
1. Unzip or check out CDash in the webroot directory on your server, and make sure the web server has read permission on the files<br />
<br />
2. Create a cdash/config.local.php and add the following lines, adapted for your server configuration:<br />
<br />
<syntaxhighlight lang="text"><br />
// Hostname of the database server<br />
$CDASH_DB_HOST = 'localhost';<br />
<br />
// Login for database access<br />
$CDASH_DB_LOGIN = 'root';<br />
<br />
// Password for database access<br />
$CDASH_DB_PASS = '';<br />
<br />
// Name of the database<br />
$CDASH_DB_NAME = 'cdash';<br />
<br />
// Database type<br />
$CDASH_DB_TYPE = 'mysql';<br />
</syntaxhighlight><br />
<br />
3. Point your web browser to the install.php script:<br />
<br />
<syntaxhighlight lang="text"><br />
http://mywebsite.com/CDash/install.php<br />
</syntaxhighlight><br />
<br />
4. Follow the installation instructions<br />
<br />
5. When the installation is done, add the following line to config.local.php to ensure the installation script is no longer accessible:<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_PRODUCTION_MODE = true;<br />
</syntaxhighlight><br />
<br />
<br />
=====Testing the installation=====<br />
<br />
In order to test the installation of the CDash server, you can download a small test project and test the submission to CDash by following these steps:<br />
<br />
1. Download and unzip the test project at:<br />
<br />
<syntaxhighlight lang="text"><br />
http://www.cdash.org/download/CDashTest.zip<br />
</syntaxhighlight><br />
<br />
2. Create a CDash project named "test" on your CDash server (see 10.7 Producing Test Dashboards)<br />
<br />
3. Download the CTestConfig.cmake file from the CDash server, replacing the existing one in CDashTest with the one from your server<br />
<br />
4. Run CMake on CDashTest to configure the project<br />
<br />
5. Run:<br />
<br />
<syntaxhighlight lang="text"><br />
make Experimental<br />
</syntaxhighlight><br />
<br />
6. Go to the dashboard page for the "test" project; you should see the submission in the Experimental section.<br />
<br />
<br />
====Advanced Server Management====<br />
<br />
=====Project Roles: CDash supports three role levels for users=====<br />
<br />
* Normal users are regular users with read and/or write access to the project's code repository.<br />
* Site maintainers are responsible for periodic submissions to CDash.<br />
* Project administrators have reserved privileges to administer the project in CDash.<br />
<br />
The first two levels can be defined by the users themselves. Project administrator access must be granted by another administrator of the project, or a CDash server administrator.<br />
<br />
In order to change the current role for a user:<br />
<br />
# Select [Manage project roles] in the administration section<br />
# If you have more than one project, select the appropriate project<br />
# In the "current users" section, change the role for a user<br />
# Click "update" to update the current role<br />
# In order to completely remove a user from a project, click "remove"<br />
# If the CVS login is not correct it can be changed from this page. Note that users can also change their CVS login manually from their profile<br />
<br />
In order to add a current role for a user:<br />
<br />
# Select [Manage project roles] in the administration section<br />
# Then, if you have more than one project, select the appropriate project<br />
# In the "Add new user" section type the first letters of the first name, last name, or email address of the user you want to add. Or type '%' in order to show all the users registered in CDash<br />
# Select the appropriate user's role<br />
# Optionally enter the user's CVS login<br />
# Click on "add user"<br />
<br />
<<Figure 10.5 : Project Role management page in CDash>><br />
<br />
<br />
=====Importing users: to batch import a list of current users for a given project=====<br />
<br />
1. Click on [manage project role] in the administration section<br />
2. Select the appropriate project<br />
3. Click "Browse" to select a CVS users file.<br />
4. The file should be formatted as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
cvsuser:email:first_name last_name<br />
</syntaxhighlight><br />
<br />
5. Click "import"<br />
6. Make sure the reported names and email addresses are correct; deselect any that should not be imported<br />
7. Click on "Register and send email". This will automatically register the users, set a random password and send a registration request to the appropriate email addresses.<br />
<br />
<br />
=====Google Analytics=====<br />
<br />
Usage statistics of the CDash server can be assessed using Google Analytics. In order to set up Google Analytics:<br />
<br />
# Go to http://www.google.com/analytics/index.html<br />
# Setup an account, if necessary<br />
# Add a website project<br />
# Login into CDash as the administrator of a project<br />
# Click on "Edit Project"<br />
# Add the code from Google into the Google Analytics Tracker (i.e. UA-43XXXX-X) for your project<br />
<br />
<br />
=====Submission backup=====<br />
<br />
CDash backs up all the incoming XML submissions and places them in the backup directory by default. The default timeframe is 48 hours. The timeframe can be changed in config.local.php as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_BACKUP_TIMEFRAME=72;<br />
</syntaxhighlight><br />
<br />
If projects are private, it is recommended to place the backup directory outside of the Apache root directory so that nobody can access the XML files, or to add the following lines to the .htaccess in the backup directory:<br />
<br />
<syntaxhighlight lang="text"><br />
<Files *><br />
order allow,deny<br />
deny from all<br />
</Files><br />
</syntaxhighlight><br />
<br />
Note that the backup directory is emptied only when a new submission arrives. If necessary, CDash can also import builds from the backup directory.<br />
<br />
# Log into CDash as administrator<br />
# Click on [Import from backups] in the administration section<br />
# Click on "Import backups"<br />
<br />
<br />
====Build Groups====<br />
<br />
Builds can be organized by groups. In CDash, three groups are defined automatically and cannot be removed: Nightly, Continuous and Experimental. These groups are the same as the ones imposed by CTest. Each group has an associated description that is displayed when clicking on the name of the group on the main dashboard.<br />
<br />
<br />
=====To add a new group:=====<br />
<br />
# Click on [manage project groups] in the administration section<br />
# Select the appropriate project<br />
# Under the "create new group" section enter the name of the new group<br />
# Click on "create group". The newly created group appears at the bottom of the current dashboard<br />
<br />
<br />
=====To order groups:=====<br />
<br />
# Click on [manage project groups] in the administration section<br />
# Select the appropriate project<br />
# Under the "Current Groups" section, click on the [up] or [down] links. The order displayed in this page is exactly the same as the order on the dashboard<br />
<br />
<br />
=====To update group description:=====<br />
<br />
# Click on [manage project groups] in the administration section<br />
# Select the appropriate project<br />
# Under the "Current Groups" section, update or add a description in the field next to the [up]/[down] links<br />
# Click "Update Description" in order to commit your changes<br />
<br />
By default, a build belongs to the group associated with the build type defined by CTest, i.e. a nightly build will go in the nightly section. CDash matches a build by its name, site, and build type. For instance, a nightly build named "Linux-gcc-4.3" from the site "midworld.kitware" will be moved to the nightly section unless a rule on "Linux-gcc-4.3"-"midworld.kitware"-"Nightly" is defined. There are two ways to move a build into a given group by defining a rule: Global Move and Single Move.<br />
<br />
<br />
=====Global move allows moving builds in batch.=====<br />
<br />
# Click on [manage project groups] in the administration section.<br />
# Select the appropriate project (if more than one).<br />
# Under "Global Move" you will see a list of the builds submitted in the past 7 days (without duplicates). Note that expected builds are also shown, even if they have not submitted in the past 7 days.<br />
# You can narrow your search by selecting a specific group (the default is All).<br />
# Select the builds to move. Hold "shift" in order to select multiple builds.<br />
# Select the target group. This is mandatory.<br />
# Optionally check the "expected" box if you expect the builds to be submitted on a daily basis. For more information on expected builds, see the "Expected builds" section below.<br />
# Click "Move Selected Builds to Group" to move the builds.<br />
<br />
<br />
=====Single move allows modifying only a particular build.=====<br />
<br />
If logged in as an administrator of the project, a small folder icon is displayed next to each build on the main dashboard page. Clicking on the icon shows some options for each build. In particular, project administrators can mark a build as expected, move a build to a specific group, or delete a bogus build.<br />
<br />
Expected builds: Project administrators can mark certain builds as expected, meaning those builds are expected to submit daily. This allows you to quickly check whether a build is missing from today's dashboard, or to quickly assess how long a build has been missing by clicking on the info icon on the main dashboard.<br />
<br />
<<Figure 10.6: Information regarding a build from the main dash board page>><br />
<br />
If an expected build did not submit the previous day and the "Email Build Missing" option is checked for the project, an email is sent to the site maintainer and project administrator to alert them (see the Sites section for more information).<br />
<br />
<br />
====Email====<br />
<br />
CDash sends email to developers and project administrators when a failure occurs for a given build. The configuration of the email feature is located in three places: the config.local.php file, the project's email configuration section, and the project's build groups section.<br />
<br />
In the config.local.php file, two variables specify the email address from which email is sent and the reply-to address. Note that the SMTP server cannot be defined in the current version of CDash; it is assumed that a local email server is running on the machine.<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_EMAIL_FROM = 'admin@mywebsite.com';<br />
$CDASH_EMAIL_REPLY = 'noreply@mywebsite.com';<br />
</syntaxhighlight><br />
<br />
<<Figure 10.7: Build Group Configuration Page>><br />
<br />
In the email configuration section of the project, several parameters can be tuned to control the email feature. These parameters were described in the previous section, "Adding CDash Support to a Project".<br />
<br />
In the "build groups" administration section of a project, an administrator can decide whether emails are sent to a specific group, or whether only a summary email should be sent. The summary email is sent for a given group when at least one build is failing on the current day.<br />
<br />
<br />
====Sites====<br />
<br />
CDash refers to a site as an individual machine submitting at least one build to a given project. A site might submit multiple builds (e.g. nightly and continuous) to multiple projects stored in CDash.<br />
<br />
In order to see the site description, click on the name of the site from the main dashboard page for a project. The description of a site includes information regarding the processor type and speed, as well as the amount of memory available on the machine. The description is sent automatically by CTest, but in some cases it may need to be edited manually. Moreover, if the machine is upgraded, e.g. more memory is added, CDash keeps track of the history of the description, allowing users to compare performance before and after the upgrade.<br />
<br />
Sites usually belong to one maintainer, who is responsible for the submissions to CDash. It is important for site maintainers to be warned when a site is not submitting, as this could indicate a configuration issue. In order to claim a site, a maintainer should:<br />
<br />
# Log into CDash<br />
# Click on a dashboard containing at least one build for the site<br />
# Click on the site name to open the description of the site<br />
# Click on [claim this site]<br />
<br />
Once a site is claimed, its maintainer will receive emails if the client machine does not submit for an unknown reason, assuming that the site is expected to submit nightly. Furthermore, the site will appear in the "My Sites" section of the maintainer's profile, facilitating a quick check of the site's status.<br />
<br />
Another feature of the site page is the pie chart showing the load of the machine. Assuming that a site submits to multiple projects, it is usually useful to know if the machine has room for other submissions to CDash. The pie chart gives an overview of the machine submission time for each project.<br />
<br />
====Graphs====<br />
<br />
CDash currently plots three types of graphs. The graphs are generated dynamically from the database records, and are interactive.<br />
<br />
<<Figure 10.8: Pie chart showing how much time is spent by a given site on building CDash projects>><br />
<br />
<<Figure 10.9: Map showing the location of the different sites building>><br />
<br />
<<Figure 10.10: Example of build time over time>><br />
<br />
The build time graph displays the time required to build a project over time. In order to display the graph you need to:<br />
<br />
# Go to the main dashboard for the project.<br />
# Click on the build name you want to track.<br />
# On the build summary page, click on [Show Build Time Graph].<br />
<br />
The test time graphs display the time taken to run a specific test, as well as its status (passed/failed), over time. To display them:<br />
<br />
# Go to the main dashboard for a project.<br />
# Click on the number of tests passed or failed.<br />
# From the list of tests, click on the status of the test.<br />
# Click on [Show Test Time Graph] and/or [Show Failing/Passing Graph].<br />
<br />
<br />
====Adding Notes to a Build====<br />
<br />
In some cases, it is useful to inform other developers that someone is currently looking at the errors for a build. CDash implements a simple note mechanism for that purpose:<br />
<br />
# Login to CDash.<br />
# On the dashboard project page, click on the build name that you would like to add the note to.<br />
# Click on the [Add a Note to this Build] link, located next to the current build matrix (see thumbnail).<br />
# Enter a short message that will be added as a note.<br />
# Select the status of the note: Simple note, Fix in progress, or Fixed.<br />
# Click on "Add Note".<br />
<br />
<br />
====Logging====<br />
<br />
CDash supports an internal logging mechanism using the error_log() PHP function. Any critical SQL errors are logged. By default, the CDash log file is located in the backup directory under the name cdash.log. Its location can be modified by changing the following variable in the config.local.php configuration file.<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_BACKUP_DIRECTORY='/var/temp/cdashbackup/log';<br />
</syntaxhighlight><br />
<br />
The log file can be accessed directly from CDash if the log file is in the standard location:<br />
<br />
# Log into CDash as administrator.<br />
# Click on [CDash logs] in the administration section.<br />
# Click on cdash.log to see the log file.<br />
<br />
CDash 2.0 introduced a log file rotation feature.<br />
<br />
<br />
====Test Timing====<br />
<br />
CDash supports checks on the duration of tests. CDash keeps the current weighted average of the mean and standard deviation of the time each test takes to run in the database. In order to keep the computation as efficient as possible, the following formula is used, which only involves the previous build.<br />
<br />
<syntaxhighlight lang="text"><br />
// alpha is the current "window" for the computation<br />
// By default, alpha is 0.3<br />
newMean = (1-alpha) * oldMean + alpha * currentTime<br />
<br />
newSD = sqrt((1-alpha) * oldSD * oldSD +<br />
        alpha * (currentTime-newMean) * (currentTime-newMean))<br />
</syntaxhighlight><br />
<br />
A test is defined as having failed timing based on the following logic:<br />
<br />
<syntaxhighlight lang="text"><br />
if previousSD < thresholdSD then previousSD = thresholdSD<br />
if currentTime > previousMean + multiplier * previousSD then fail<br />
</syntaxhighlight><br />
<br />
<br />
====Mobile Support====<br />
<br />
Since CDash is written using template layers via XSLT, developing new layouts is as simple as adding new rendering templates. As a demonstration, an iPhone web template is provided with the current version of CDash.<br />
<br />
The main page shows a list of the public projects hosted on the server. Clicking on the name of a project loads its current dashboard. In the same manner, clicking on a given build displays more detailed information about that build. As of this writing, logging in and accessing the private sections of CDash are not supported with this layout.<br />
<br />
<br />
====Backing up CDash====<br />
<br />
All of the data used by CDash (except the logs) is stored in its database. It is important to back up the database regularly, especially before performing a CDash upgrade. There are a couple of ways to back up a MySQL database. The easiest is to use the mysqldump command (see http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html):<br />
<br />
<syntaxhighlight lang="text"><br />
mysqldump -r cdashbackup.sql cdash<br />
</syntaxhighlight><br />
<br />
If you are using MyISAM tables exclusively, you can instead copy the cdash directory inside your MySQL data directory. Note that you need to shut down MySQL before doing the copy so that no files change during the copy. Similarly to MySQL, PostgreSQL has a pg_dump utility:<br />
<br />
<syntaxhighlight lang="text"><br />
pg_dump -U postgreSQL_user cdash > cdashbackup.sql<br />
</syntaxhighlight><br />
<br />
<br />
====Upgrading CDash====<br />
<br />
When a new version of CDash is released, or if you decide to update from the SVN repository, CDash will warn you on the front page if the current database needs to be upgraded. When upgrading to a new release version, the following steps should be taken:<br />
<br />
# Backup your SQL database (see previous section).<br />
# Backup your config.local.php (or config.php) configuration files.<br />
# Replace your current cdash directory with the latest version and copy the config.local.php in the cdash directory.<br />
# Navigate your browser to your CDash page. (e.g. http://localhost/CDash).<br />
# Note the version number on the main page; it should match the version that you are upgrading to.<br />
# The following message may appear: "The current database schema doesn't match the version of CDash you are running, upgrade your database structure in the Administration panel of CDash." This is a helpful reminder to perform the following steps.<br />
# Login to CDash as administrator.<br />
# In the 'Administration' section, click on '[CDash Maintenance]'.<br />
# Click on 'Upgrade CDash': this process might take some time depending on the size of your database (do not close your browser).<br />
#* Progress messages may appear while CDash performs the upgrade.<br />
#* If the upgrade process takes too long you can check in the backup/cdash.log file to see where the process is taking a long time and/or failing.<br />
#* It has been reported that on some systems the spinning icon never turns into a check mark. Please check the cdash.log for the "Upgrade done." string if you feel that the upgrade is taking too long.<br />
#* On a 50GB database the upgrade might take up to 2 hours.<br />
# Some web browsers might have issues when upgrading (with some JavaScript variables not being passed correctly); in that case you can perform individual updates. For example, upgrading from CDash 1-2 to 1-4:<br />
<br />
<syntaxhighlight lang="text"><br />
http://mywebsite.com/CDash/backwardCompatibilityTools.php?upgrade-1-4=1<br />
</syntaxhighlight><br />
<br />
<<Figure 10.11: Example of dashboard on the iPhone>><br />
<br />
<br />
====CDash Maintenance====<br />
<br />
Database maintenance: we recommend that you perform database optimization (reindexing, purging, etc.) regularly to maintain a stable database. MySQL has a utility called mysqlcheck, and PostgreSQL has several utilities such as vacuumdb.<br />
<br />
Deleting builds with incorrect dates: some builds might be submitted to CDash with the wrong date, either because the date in the XML file is incorrect or the timezone was not recognized by CDash (mainly by PHP). These builds will not show up in any dashboard because the start time is bogus. In order to remove these builds:<br />
<br />
# Login to CDash as administrator.<br />
# Click on [CDash maintenance] in the administration section.<br />
# Click on 'Delete builds with wrong start date'.<br />
<br />
Recompute test timing: if you have just upgraded CDash, you might notice that current submissions show a high number of tests failing due to timing defects. This is because CDash does not yet have enough sample points to compute the mean and standard deviation for each test; in particular, the standard deviation might be very small (probably zero for the first few samples). You should turn "enable test timing" off for about a week, or until enough builds have been submitted for CDash to calculate an approximate mean and standard deviation for each test time.<br />
<br />
The other option is to force CDash to compute the mean and standard deviation for each test over the past few days. Be warned that this process may take a long time, depending on the number of tests and projects involved. In order to recompute the test timing:<br />
<br />
# Login to CDash as administrator.<br />
# Click on [CDash maintenance] in the administration section.<br />
# Specify the number of days (default is 4) to recompute the test timings for.<br />
# Click on "Compute test timing". When the process is done the new mean, standard deviation, and status should be updated for the tests submitted during this period.<br />
<br />
<br />
=====Automatic build removal=====<br />
<br />
In order to keep the database at a reasonable size, CDash can automatically purge old builds. There are currently two ways to set up automatic removal of builds. Without a cronjob, edit config.local.php and add or edit the following line:<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_AUTOREMOVE_BUILDS='1';<br />
</syntaxhighlight><br />
<br />
CDash will automatically remove builds on the first submission of the day. Note that removing builds might add an extra load on the database, or slow down the current submission process if your database is large and the number of submissions is high. If you can use a cronjob, the PHP command-line tool can be used to trigger build removals at a convenient time. For example, to remove the builds for all projects at 6am every Sunday:<br />
<br />
<syntaxhighlight lang="text"><br />
0 6 * * 0 php5 /var/www/CDash/autoRemoveBuilds.php all<br />
</syntaxhighlight><br />
<br />
Note that the 'all' parameter can be changed to a specific project name in order to purge builds from a single project.<br />
<br />
<br />
=====CDash XML Schema=====<br />
<br />
The XML parsers in CDash can be easily extended to support new features. The current XML schemas generated by CTest, and their features as described in the book, are located at:<br />
<br />
<syntaxhighlight lang="text"><br />
http://public.kitware.com/Wiki/CDash:XML<br />
</syntaxhighlight><br />
<br />
====Subprojects====<br />
<br />
CDash (versions 1.4 and later) supports splitting projects into subprojects; some of the subprojects may in turn depend on other subprojects. A typical real-life project consists of libraries, executables, test suites, documentation, web pages, and installers. Organizing your project into well-defined subprojects and presenting the results of nightly builds on a CDash dashboard can help identify where the problems are at different levels of granularity.<br />
<br />
A project with subprojects has a different view for its top-level CDash page than a project without any: it contains a summary row for the project as a whole, and one summary row for each subproject.<br />
<br />
<br />
=====Organizing and defining subprojects=====<br />
<br />
To add subproject organization to your project, you must: (1) define the subprojects for CDash, so that it knows how to display them properly and (2) use build scripts with CTest to submit subproject builds of your project. Some (re-)organization of your project's CMakeLists.txt files may also be necessary to allow building of your project by subprojects.<br />
<br />
<<Figure 10.12: Main project page with subprojects>><br />
<br />
There are two ways to define subprojects and their dependencies: interactively in the CDash GUI when logged in as a project administrator, or by submitting a Project.xml file describing the subprojects and dependencies.<br />
<br />
<br />
=====Adding Subprojects Interactively=====<br />
<br />
If you are a project administrator, a "Manage subprojects" button will appear for each of your projects on the My CDash page. Clicking it opens the manage subprojects page, where you may add new subprojects or establish dependencies between existing subprojects for any project you administer. There are two tabs on this page: one for viewing the current subprojects along with their dependencies, and one for creating new subprojects.<br />
<br />
To add subprojects, for instance two subprojects called Exes and Libs, and to make Exes depend on Libs, the following steps are necessary:<br />
<br />
* Click the "Add a subproject" tab.<br />
* Type "Exes" in the "Add a subproject" edit field.<br />
* Click the "Add subproject" button.<br />
* Click the "Add a subproject" tab.<br />
* Type "Libs" in the "Add a subproject" edit field.<br />
* Click the "Add subproject" button.<br />
* In the "Exes" row of the "Current Subprojects" tab, choose "Libs" from the "Add dependency" drop-down list and click the "Add dependency" button.<br />
<br />
To remove a dependency or a subproject, click on the "X" next to the item you wish to delete.<br />
<br />
<br />
=====Adding Subprojects Automatically=====<br />
<br />
Another way to define CDash subprojects and their dependencies is to submit a "Project.xml" file along with the usual submission files that CTest sends when it submits a build to CDash. To define the same two subprojects as in the interactive example above (Exes and Libs) with the same dependency (Exes depend on Libs), the Project.xml file would look like the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
<Project name="Tutorial"><br />
<SubProject name="Libs"></SubProject><br />
<SubProject name="Exes"><br />
<Dependency name="Libs"/><br />
</SubProject><br />
</Project><br />
</syntaxhighlight><br />
<br />
Once the Project.xml file is written or generated, it can be submitted to CDash from a ctest -S script using the new FILES argument to the ctest_submit command, or directly from the ctest command line in a build tree configured for dashboard submission.<br />
<br />
From inside a ctest -S script:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_submit(FILES "${CTEST_BINARY_DIRECTORY}/Project.xml")<br />
</syntaxhighlight><br />
<br />
From the command line:<br />
<br />
<syntaxhighlight lang="text"><br />
cd ../Project-build<br />
ctest --extra-submit Project.xml<br />
</syntaxhighlight><br />
<br />
CDash will automatically add subprojects and dependencies according to the Project.xml file. CDash will also remove any subprojects or dependencies not defined in the Project.xml file. Additionally, if the same Project.xml is submitted multiple times, the second and subsequent submissions will have no observable effect: the first submission adds/modifies the data, the second and later submissions send the same data, so no changes are necessary. CDash tracks changes to the subproject definitions over time to allow for projects to evolve. If you view dashboards from a past date, CDash will present the project/subproject views according to the subproject definitions in effect on that date.<br />
<br />
<br />
====Using ctest_submit with PARTS and FILES====<br />
<br />
In CTest version 2.8 and later, the ctest_submit() (page 354) command supports new PARTS and FILES arguments. With PARTS, you can send any subset of the XML files with each ctest_submit call. Previously, all parts would be sent with any call to ctest_submit. Typically, the script would wait until all dashboard stages were complete and then call ctest_submit once to send the results of all stages at the end of the run. Now, a script may call ctest_submit with PARTS to do partial submissions of subsets of the results. For example, you can submit configure results after ctest_configure() (page 352), build results after ctest_build() (page 351), and test results after ctest_test() (page 355). This allows information to be posted as the builds progress.<br />
<br />
With FILES, you can send arbitrary XML files to CDash. In addition to the standard build result XML files that CTest sends, CDash also handles the new Project.xml file that describes subprojects and dependencies. Prior to the addition of the ctest_submit PARTS handling, a typical dashboard script would contain a single ctest_submit() call on its last line:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
</syntaxhighlight><br />
<br />
Now, submissions can occur incrementally, with each part of the submission sent piecemeal as it becomes available:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit (PARTS Update Configure Notes)<br />
<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
ctest_submit (PARTS Build)<br />
<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit (PARTS Test)<br />
</syntaxhighlight><br />
<br />
Submitting incrementally by parts means that you can inspect the results of the configure stage live on the CDash dashboard while the build is still in progress. Likewise, you can inspect the results of the build stage live while the tests are still running. When submitting by parts, it's important to use the APPEND keyword in the ctest_build command. If you don't use APPEND, then CDash will erase any existing build with the same build name, site name, and build stamp when it receives the Build.xml file.<br />
<br />
<br />
====Splitting Your Project into Multiple Subprojects====<br />
<br />
One ctest_build() (page 351) invocation that builds everything, followed by one ctest_test() (page 355) invocation that tests everything, is sufficient for a project that has no subprojects. If you want to submit results on a per-subproject basis to CDash, however, you will have to make some changes to your project and test scripts. For your project, you need to identify which targets are part of which subprojects. If you organize your CMakeLists files such that you have a target to build for each subproject, and you can derive (or look up) the name of that target based on the subproject name, then revising your script to separate it into multiple smaller configure/build/test chunks should be relatively painless. To do this, you can modify your CMakeLists files in various ways depending on your needs. The most common changes are listed below.<br />
<br />
<br />
=====CMakeLists.txt modifications=====<br />
<br />
* Name targets the same as subprojects, base target names on subproject names, or provide a lookup mechanism to map from subproject name to target name.<br />
* Possibly add custom targets to aggregate existing targets into subprojects, using add_dependencies to say which existing targets the custom target depends on.<br />
* Add the LABELS target property to targets with a value of the subproject name.<br />
* Add the LABELS test property to tests with a value of the subproject name.<br />
<br />
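The second bullet can be sketched as follows. This is a hedged, minimal example: the Docs, DocsHtml, and DocsPdf names are illustrative assumptions, not part of the book's Tutorial project.<br />
<br />
<syntaxhighlight lang="text"><br />
# hypothetical: aggregate two existing documentation targets<br />
# into a single "Docs" subproject target<br />
add_custom_target (Docs)<br />
add_dependencies (Docs DocsHtml DocsPdf)<br />
set_property (TARGET Docs PROPERTY LABELS Docs)<br />
</syntaxhighlight><br />
<br />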
Next, you need to modify your CTest scripts that run your dashboards. To split your one large monolithic<br />
build into smaller subproject builds, you can use a foreach loop in your CTest driver script. To help you<br />
iterate over your subprojects, CDash provides a variable named CTEST_PROJECT_SUBPROJECTS in<br />
CTestConfig.cmake. Given the above example, CDash produces a variable like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_PROJECT_SUBPROJECTS Libs Exes)<br />
</syntaxhighlight><br />
<br />
CDash orders the elements in this list such that the independent subprojects (that do not depend on any other subprojects) are first, followed by subprojects that depend only on the independent subprojects, and after that subprojects that depend on those. The same logic continues until all subprojects are listed exactly once in this list in an order that makes sense for building them sequentially, one after the other.<br />
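<br />
As a hedged illustration of this ordering (the Plugins subproject is hypothetical, added here only to show a longer dependency chain): if Exes depends on Libs, and Plugins depends on Exes, CDash would produce:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_PROJECT_SUBPROJECTS Libs Exes Plugins)<br />
</syntaxhighlight><br />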
<br />
To facilitate building just the targets associated with a subproject, use the variable CTEST_BUILD_TARGET to tell ctest_build what to build. To facilitate running just the tests associated with a subproject, assign the LABELS test property to your tests and use the new INCLUDE_LABEL argument to ctest_test() (page 355).<br />
<br />
<br />
=====ctest driver script modifications=====<br />
<br />
* Iterate over the subprojects in dependency order (from independent to most dependent...).<br />
* Set the SubProject and Label global properties - CTest uses these properties to submit the results to the correct subproject on the CDash server.<br />
* Build the target(s) for this subproject: compute the name of the target to build from the subproject name, set CTEST_BUILD_TARGET, call ctest_build.<br />
* Run the tests for this subproject using the INCLUDE or INCLUDE_LABEL arguments to ctest_test.<br />
* Use ctest_submit with the PARTS argument to submit partial results as they complete.<br />
<br />
<br />
To illustrate this, the following example shows the changes required to split a build into smaller pieces. Assume that the subproject name is the same as the target name required to build the subproject's components. For example, here is a snippet from CMakeLists.txt, in the hypothetical Tutorial project. The only additions necessary (since the target names are the same as the subproject names) are the calls to set_property() (page 329) for each target and each test.<br />
<br />
<syntaxhighlight lang="text"><br />
# "Libs" is the library name (therefore a target name) and<br />
# the subproject name<br />
add_library (Libs ...)<br />
set_property (TARGET Libs PROPERTY LABELS Libs)<br />
add_test (LibsTest1 ...)<br />
add_test (LibsTest2 ...)<br />
set_property (TEST LibsTest1 LibsTest2 PROPERTY LABELS Libs)<br />
<br />
# "Exes" is the executable name (therefore a target name)<br />
# and the subproject name<br />
add_executable (Exes ...)<br />
target_link_libraries (Exes Libs)<br />
set_property (TARGET Exes PROPERTY LABELS Exes)<br />
add_test (ExesTest1 ...)<br />
add_test (ExesTest2 ...)<br />
set_property (TEST ExesTest1 ExesTest2 PROPERTY LABELS Exes)<br />
</syntaxhighlight><br />
<br />
Here is an example of what the CTest driver script might look like before and after organizing this project into subprojects. Before the changes<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
# builds *all* targets: Libs and Exes<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
</syntaxhighlight><br />
<br />
After the changes:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_submit (PARTS Update Notes)<br />
<br />
# to get CTEST_PROJECT_SUBPROJECTS definition:<br />
include ("${CTEST_SOURCE_DIRECTORY}/CTestConfig.cmake")<br />
foreach (subproject ${CTEST_PROJECT_SUBPROJECTS})<br />
set_property (GLOBAL PROPERTY SubProject ${subproject})<br />
set_property (GLOBAL PROPERTY Label ${subproject})<br />
<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit (PARTS Configure)<br />
<br />
set (CTEST_BUILD_TARGET "${subproject}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
# builds target ${CTEST_BUILD_TARGET}<br />
ctest_submit (PARTS Build)<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}"<br />
INCLUDE_LABEL "${subproject}"<br />
)<br />
<br />
# runs only tests that have a LABELS property matching<br />
# "${subproject}"<br />
ctest_submit (PARTS Test)<br />
endforeach ()<br />
</syntaxhighlight><br />
<br />
In some projects, more than one ctest_build step may be required to build all the pieces of the subproject. For example, in Trilinos, each subproject builds the ${subproject}_libs target, and then builds the all target to build all the configured executables in the test suite. They also configure dependencies such that only the executables that need to be built for the currently configured packages build when the all target is built.<br />
<br />
Normally, if you submit multiple Build.xml files to CDash with exactly the same build stamp, CDash deletes the existing entry and adds the new entry in its place. In the case where multiple ctest_build steps are required, each with its own ctest_submit (PARTS Build) call, use the APPEND keyword argument in all of the ctest_build calls that belong together. The APPEND flag tells CDash to accumulate the results from multiple submissions and display the aggregation of all of them in one row on the dashboard. From CDash's perspective, multiple ctest_build calls (with the same build stamp and subproject, and APPEND turned on) result in a single CDash build.<br />
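<br />
A minimal sketch of this multi-step pattern, assuming Trilinos-style target names (the ${subproject}_libs and all targets come from the Trilinos description above; the surrounding loop body is hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# first build step for this subproject<br />
set (CTEST_BUILD_TARGET "${subproject}_libs")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
ctest_submit (PARTS Build)<br />
<br />
# second build step; APPEND makes CDash aggregate both<br />
# submissions into one row on the dashboard<br />
set (CTEST_BUILD_TARGET "all")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
ctest_submit (PARTS Build)<br />
</syntaxhighlight><br />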
<br />
Adopt some of these tips and techniques in your favorite CMake-based project:<br />
<br />
* LABELS is a new CMake/CTest property that applies to source files, targets, and tests. Labels are sent to CDash inside the resulting XML files.<br />
* Use ctest_submit (PARTS) to do incremental submissions. Results are available for viewing on the dashboards sooner. Don't forget to use APPEND in your ctest_build calls when submitting by parts.<br />
* Use INCLUDE_LABEL with ctest_test to run only the tests with labels that match the regular expression.<br />
* Use CTEST_BUILD_TARGET to build your subprojects one at a time, submitting subproject dashboards along the way.</div>
<hr />
<div>==CHAPTER TEN::AUTOMATION & TESTING WITH CMAKE==<br />
<br />
===Testing with CMake, CTest, and CDash===<br />
<br />
Testing is a key tool for producing and maintaining robust, valid software. This chapter will examine the tools that are part of CMake to support software testing. We will begin with a brief discussion of testing approaches, and then discuss how to add tests to your software project using CMake. Finally we will look at additional tools that support creating centralized software status dashboards.<br />
<br />
The tests for a software package may take a number of forms. At the most basic level there are smoke tests, such as one that simply verifies that the software compiles. While this may seem like a simple test, with the wide variety of platforms and configurations available, smoke tests catch more problems than any other type of test. Another form of smoke test is to verify that a test runs without crashing. This can be handy for situations where the developer does not want to spend the time creating more complex tests, but is willing to run some simple tests. Most of the time these simple tests can be small example programs. Running them verifies not only that the build was successful, but that any required shared libraries can be loaded (for projects that use them), and that at least some of the code can be executed without crashing.<br />
<br />
Moving beyond basic smoke tests leads to more specific tests such as regression, black-, and white-box testing. Each of these has its strengths. Regression testing verifies that the results of a test do not change over time or platform. This is very useful when performed frequently, as it provides a quick check that the behavior and results of the software have not changed. When a regression test fails, a quick look at recent code changes can usually identify the culprit. Unfortunately, regression tests typically require more effort to create than other tests.<br />
<br />
White- and black-box testing refer to tests written to exercise units of code (at various levels of integration), with and without knowledge of how those units are implemented, respectively. White-box testing is designed to stress potential failure points in the code knowing how that code was written, and hence its weaknesses. As with regression testing, it can take a substantial amount of effort to create good white-box tests. Black-box testing typically knows little or nothing about the implementation of the software other than its public API. Black-box testing can provide a lot of code coverage without too much effort in developing the tests. This is especially true for libraries of object-oriented software where the APIs are well defined. A black-box test can be written to go through and invoke a number of typical methods on all the classes in the software.<br />
<br />
The final type of testing we will discuss is software standard compliance testing. While the other test types we have discussed are focused on determining if the code works properly, compliance testing tries to determine if the code adheres to the coding standards of the software project. This could be a check to verify that all classes have implemented some key method, or that all functions have a common prefix. The options for this type of test are limitless and there are a number of ways to perform such testing. There are software analysis tools that can be used, or specialized test programs (such as Python scripts) could be written. The key point to realize is that the tests do not necessarily have to involve running some part of the software. The tests might run some other tool on the source code itself.<br />
<br />
There are a number of reasons why it helps to have testing support integrated into the build process. First, complex software projects may have a number of configuration or platform-dependent options. The build system knows what options can be enabled and can then enable the appropriate tests for those options. For example, the Visualization Toolkit (VTK) includes support for a parallel processing library called MPI. If VTK is built with MPI support then additional tests are enabled that make use of MPI and verify that the MPI-specific code in VTK works as expected. Secondly, the build system knows where the executables will be placed, and it has tools for finding other required executables (such as perl, python etc). The third reason is that with UNIX Makefiles it is common to have a test target in the Makefile so that developers can type make test and have the test(s) run. In order for this to work, the build system must have some knowledge of the testing process.<br />
<br />
<br />
===How Does CMake Facilitate Testing?===<br />
<br />
CMake facilitates testing your software through special testing commands and the CTest executable. First, we will discuss the key testing commands in CMake. To add testing to a CMake-based project, simply include(CTest) (page 317) and use the add_test() (page 277) command. The add_test command has a simple syntax as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (NAME TestName COMMAND ExecutableToRun arg1 arg2 ...)<br />
</syntaxhighlight><br />
<br />
The first argument is simply a string name for the test. This is the name that will be displayed by testing<br />
programs. The second argument is the executable to run. The executable can be built as part of the project<br />
or it can be a standalone executable such as python, perl, etc. The remaining arguments will be passed to the<br />
running executable. A typical example of testing using the add_test command would look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (TestInstantiator TestInstantiator.cxx)<br />
target_link_libraries (TestInstantiator vtkCommon)<br />
add_test (NAME TestInstantiator<br />
COMMAND TestInstantiator)<br />
</syntaxhighlight><br />
<br />
The add_test command is typically placed in the CMakeLists file for the directory that has the test in it. For large projects, there may be multiple CMakeLists files with add_test commands in them. Once the add_test commands are present in the project, the user can run the tests by invoking the "test" target of the Makefile, or the RUN_TESTS target of Visual Studio or Xcode. An example of running the CMake tests using the Makefile generator on Linux would be:<br />
<br />
<syntaxhighlight lang="text"><br />
$ make test<br />
Running tests...<br />
Test project<br />
Start 2: kwsys.testEncode<br />
1/20 Test #2: kwsys.testEncode .......... Passed 0.02 sec<br />
Start 3: kwsys.testTerminal<br />
2/20 Test #3: kwsys.testTerminal ........ Passed 0.02 sec<br />
Start 4: kwsys.testAutoPtr<br />
3/20 Test #4: kwsys.testAutoPtr ......... Passed 0.02 sec<br />
</syntaxhighlight><br />
<br />
<br />
===Additional Test Properties===<br />
<br />
By default a test passes if all of the following conditions are true:<br />
<br />
* The test executable was found<br />
* The test ran without exception<br />
* The test exited with return code 0<br />
<br />
That said, these behaviors can be modified using the set_property() (page 329) command:<br />
<br />
<syntaxhighlight lang="text"><br />
set_property (TEST test_name<br />
PROPERTY prop1 value1 value2 ...)<br />
</syntaxhighlight><br />
<br />
This command will set additional properties for the specified tests. Example properties are:<br />
<br />
'''ENVIRONMENT''' Specifies environment variables that should be defined for running a test. If set to a list of environment variables and values of the form MYVAR=value, those environment variables will be defined while the test is running. The environment is restored to its previous state after the test is done.<br />
<br />
'''LABELS''' Specifies a list of text labels associated with a test. These labels can be used to group tests together based on what they test. For example, you could add a label of MPI to all tests that exercise MPI code.<br />
<br />
'''WILL_FAIL''' If this option is set to true, then the test will pass if the return code is not 0, and fail if it is. This reverses the third condition of the pass requirements.<br />
<br />
'''PASS_REGULAR_EXPRESSION''' If this option is specified, then the output of the test is checked against the regular expression provided (a list of regular expressions may be passed in as well). If none of the regular expressions match, then the test will fail. If at least one of them matches, then the test will pass.<br />
<br />
'''FAIL_REGULAR_EXPRESSION''' If this option is specified, then the output of the test is checked against the regular expression provided (a list of regular expressions may be passed in as well). If none of the regular expressions match, then the test will pass. If at least one of them matches, then the test will fail.<br />
<br />
If both PASS_REGULAR_EXPRESSION (page 614) and FAIL_REGULAR_EXPRESSION (page 613) are specified, then the FAIL_REGULAR_EXPRESSION takes precedence. The following example illustrates using the PASS_REGULAR_EXPRESSION and FAIL_REGULAR_EXPRESSION:<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (NAME outputTest COMMAND outputTest)<br />
set (passRegex "^Test passed" "^All ok")<br />
set (failRegex "Error" "Fail")<br />
<br />
set_property (TEST outputTest<br />
PROPERTY PASS_REGULAR_EXPRESSION "${passRegex}")<br />
set_property (TEST outputTest<br />
PROPERTY FAIL_REGULAR_EXPRESSION "${failRegex}")<br />
</syntaxhighlight><br />
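<br />
The other properties listed above can be set the same way. A hedged sketch reusing the outputTest test from the example (the environment variable values and label names are illustrative assumptions):<br />
<br />
<syntaxhighlight lang="text"><br />
# define environment variables for the duration of the test<br />
set_property (TEST outputTest<br />
  PROPERTY ENVIRONMENT "MYVAR=value" "DEBUG_LEVEL=2")<br />
# group this test under two labels<br />
set_property (TEST outputTest PROPERTY LABELS MPI Nightly)<br />
# invert the pass/fail meaning of the return code<br />
set_property (TEST outputTest PROPERTY WILL_FAIL true)<br />
</syntaxhighlight><br />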
<br />
<br />
===Testing Using CTest===<br />
<br />
When you run the tests from your build environment, what really happens is that the build environment runs CTest. CTest is an executable that comes with CMake; it handles running the tests for the project. While CTest works well with CMake, you do not have to use CMake in order to use CTest. The main input file for CTest is called CTestTestfile.cmake. This file will be created in each directory that was processed by CMake (typically every directory with a CMakeLists file). The syntax of CTestTestfile.cmake is like the regular CMake syntax, with a subset of the commands available. If CMake is used to generate testing files, they will list any subdirectories that need to be processed as well as any add_test() (page 277) calls. The subdirectories are those that were added by subdirs() (page 350) or add_subdirectory() (page 277) commands. CTest can then parse these files to determine what tests to run. An example of such a file is shown below:<br />
<br />
<syntaxhighlight lang="text"><br />
# CMake generated Testfile for<br />
# Source directory: C:/CMake<br />
# Build directory: C:/CMakeBin<br />
#<br />
# This file includes the relevant testing commands required<br />
# for testing this directory and lists subdirectories to<br />
# be tested as well.<br />
<br />
ADD_TEST (SystemInformationNew ...)<br />
<br />
SUBDIRS (Source/kwsys)<br />
SUBDIRS (Utilities/cmzlib)<br />
...<br />
</syntaxhighlight><br />
<br />
When CTest parses the CTestTestfile.cmake files, it will extract the list of tests from them. These tests will be run, and for each test CTest will display the name of the test and its status. Consider the following sample output:<br />
<br />
<syntaxhighlight lang="text"><br />
$ ctest<br />
Test project C:/CMake-build26<br />
Start 1: SystemInformationNew<br />
1/21 Test #1: SystemInformationNew ...... Passed 5.78 sec<br />
Start 2: kwsys.testEncode<br />
2/21 Test #2: kwsys.testEncode .......... Passed 0.02 sec<br />
Start 3: kwsys.testTerminal<br />
3/21 Test #3: kwsys.testTerminal ........ Passed 0.00 sec<br />
Start 4: kwsys.testAutoPtr<br />
4/21 Test #4: kwsys.testAutoPtr ......... Passed 0.02 sec<br />
Start 5: kwsys.testHashSTL<br />
5/21 Test #5: kwsys.testHashSTL ......... Passed 0.02 sec<br />
...<br />
100% tests passed, 0 tests failed out of 21<br />
Total Test time (real) = 59.22 sec<br />
</syntaxhighlight><br />
<br />
CTest is run from within your build tree. It will run all the tests found in the current directory as well as any subdirectories listed in the CTestTestfile.cmake. For each test that is run CTest will report if the test passed and how long it took to run the test.<br />
<br />
The CTest executable includes some handy command line options to make testing a little easier. We will start by looking at the options you would typically use from the command line.<br />
<br />
<syntaxhighlight lang="text"><br />
-R <regex> Run tests matching regular expression<br />
-E <regex> Exclude tests matching regular expression<br />
-L <regex> Run tests with labels matching the regex<br />
-LE <regex> Run tests with labels not matching regexp<br />
-C <config> Choose the configuration to test<br />
-V, --verbose Enable verbose output from tests.<br />
-N, --show-only Disable actual execution of tests.<br />
-I [Start,End,Stride,test#,test#|Test file]<br />
Run specific tests by range and number.<br />
-H Display a help message<br />
</syntaxhighlight><br />
<br />
The -R option is probably the most commonly used. It allows you to specify a regular expression; only the tests with names matching the regular expression will be run. Using the -R option with the name (or part of the name) of a test is a quick way to run a single test. The -E option is similar, except that it excludes all tests matching the regular expression. The -L and -LE options are similar to -R and -E, except that they apply to test labels that were set using the set_property() (page 329) command as described earlier. The -C option is mainly for IDE builds where you might have multiple configurations, such as Release and Debug, in the same tree. The argument following -C determines which configuration will be tested. The -V argument is useful when you are trying to determine why a test is failing. With -V, CTest will print out the command line used to run the test, as well as any output from the test itself. The -V option can be used with any invocation of CTest to provide more verbose output. The -N option is useful if you want to see what tests CTest would run without actually running them.<br />
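<br />
For instance, a few hedged invocations built from the options above (the test name patterns and the Debug configuration are illustrative, borrowed from the sample output earlier in this chapter):<br />
<br />
<syntaxhighlight lang="text"><br />
# run only the kwsys tests, with verbose output<br />
ctest -R kwsys -V<br />
# run everything except tests whose names match "Encode"<br />
ctest -E Encode<br />
# list, without running, the tests for the Debug configuration<br />
ctest -C Debug -N<br />
</syntaxhighlight><br />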
<br />
Running the tests and making sure they all pass before committing any changes to the software is a sure-fire way to improve your software quality and development process. Unfortunately, for large projects the number of tests and the time required to run them may be prohibitive. In these situations the -I option of CTest can be used. The -I option allows you to flexibly specify a subset of the tests to run. For example, the following invocation of CTest will run every seventh test.<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -I ,,7<br />
</syntaxhighlight><br />
<br />
While this is not as good as running every test, it is better than not running any, and it may be a more practical solution for many developers. Note that if the start and end arguments are not specified, as in this example, then they will default to the first and last tests. In another example, assume that you always want to run a few tests plus a subset of the others. In this case you can explicitly add those tests to the end of the arguments for -I. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -I ,,5,1,2,3,10<br />
</syntaxhighlight><br />
<br />
will run tests 1,2,3, and 10, plus every fifth test. You can pass as many test numbers as you want after the stride argument.<br />
<br />
<br />
===Using CTest to Drive Complex Tests===<br />
<br />
Sometimes to properly test a project you need to actually compile code during the testing phase. There are several reasons for this. First, if test programs are compiled as part of the main project, they can end up taking up a significant amount of the build time. Also, if a test fails to build, the main build should not fail as well. Finally, IDE projects can quickly become too large to load and work with. The CTest command supports a group of command line options that allow it to be used as the test executable to run. When used as the test executable, CTest can run CMake, run the compile step, and finally run a compiled test. We will now look at the command line options to CTest that support building and running tests.<br />
<br />
<syntaxhighlight lang="text"><br />
--build-and-test src_directory build_directory<br />
Run cmake on the given source directory using the specified build directory.<br />
--test-command Name of the program to run.<br />
--build-target Specify a specific target to build.<br />
--build-nocmake Run the build without running cmake first.<br />
--build-run-dir Specify directory to run programs from.<br />
--build-two-config Run cmake twice before the build.<br />
--build-exe-dir Specify the directory for the executable.<br />
--build-generator Specify the generator to use.<br />
--build-project Specify the name of the project to build.<br />
--build-makeprogram Specify the make program to use.<br />
--build-noclean Skip the make clean step.<br />
--build-options Add extra options to the build step.<br />
</syntaxhighlight><br />
<br />
For an example, consider the following add_test() (page 277) command taken from the CMakeLists.txt file of CMake itself. It shows how CTest can be used both to compile and run a test.<br />
<br />
<syntaxhighlight lang="text"><br />
add_test (simple ${CMAKE_CTEST_COMMAND}<br />
--build-and-test "${CMAKE_SOURCE_DIR}/Tests/Simple"<br />
"${CMAKE_BINARY_DIR}/Tests/Simple"<br />
--build-generator ${CMAKE_GENERATOR}<br />
--build-makeprogram ${CMAKE_MAKE_PROGRAM}<br />
--build-project Simple<br />
--test-command simple)<br />
</syntaxhighlight><br />
<br />
In this example, the add_test command is first passed the name of the test, "simple". After the name of the test, the command to be run is specified. In this case, the test command to be run is CTest. The CTest command is referenced via the CMAKE_CTEST_COMMAND (page 626) variable. This variable is always set by CMake to the CTest command that came from the CMake installation used to build the project. Next, the source and binary directories are specified. The next options to CTest are the --build-generator and --build-makeprogram options. These are specified using the CMake variables CMAKE_GENERATOR (page 628) and CMAKE_MAKE_PROGRAM (page 630). Both CMAKE_MAKE_PROGRAM and CMAKE_GENERATOR are defined by CMake. This is an important step, as it makes sure that the same generator is used for building the test as was used for building the project itself. The --build-project option is passed Simple, which corresponds to the project() (page 327) command used in the Simple test. The final argument is the --test-command, which tells CTest the command to run once it gets a successful build; it should be the name of the executable that will be compiled by the test.<br />
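The project built and tested this way is an ordinary CMake project. As a minimal sketch (the source file name simple.cxx is an assumption for illustration), a CMakeLists.txt such as the one in Tests/Simple could contain:<br />
<br />
<syntaxhighlight lang="text"><br />
# must match the --build-project argument<br />
project (Simple)<br />
# builds the executable named by --test-command<br />
add_executable (simple simple.cxx)<br />
</syntaxhighlight><br />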
<br />
<br />
===Handling a Large Number of Tests===<br />
<br />
When a large number of tests exist in a single project, it is cumbersome to have individual executables available for each test. That said, the developer of the project should not be required to create tests with complex argument parsing. This is why CMake provides a convenience command for creating a test driver program. This command is called create_test_sourcelist() (page 282). A test driver is a program that links together many small tests into a single executable. This is useful when building static executables with large libraries to shrink the total required size. The signature for create_test_sourcelist is as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
create_test_sourcelist (SourceListName<br />
DriverName<br />
test1 test2 test3<br />
EXTRA_INCLUDE include.h<br />
FUNCTION function<br />
)<br />
</syntaxhighlight><br />
<br />
The first argument is the variable that will contain the list of source files that must be compiled to make the test executable. The DriverName is the name of the test driver program (i.e. the name of the resulting executable). The rest of the arguments consist of a list of test source files. Each test source file should contain a function with the same name as the file without its extension (foo.cxx should define int foo(int argc, char* argv[])). The resulting executable will be able to invoke each of the tests by name on the command line. The EXTRA_INCLUDE and FUNCTION arguments support additional customization of the test driver program. Consider the following CMakeLists file fragment to see how this command can be used:<br />
<br />
<syntaxhighlight lang="text"><br />
# create the testing file and list of tests<br />
create_test_sourcelist (Tests<br />
CommonCxxTests.cxx<br />
ObjectFactory.cxx<br />
otherArrays.cxx<br />
otherEmptyCell.cxx<br />
TestSmartPointer.cxx<br />
SystemInformation.cxx<br />
)<br />
<br />
# add the executable<br />
add_executable (CommonCxxTests ${Tests})<br />
<br />
# remove the test driver source file<br />
set (TestsToRun ${Tests})<br />
remove (TestsToRun CommonCxxTests.cxx)<br />
<br />
# Add all the ADD_TEST for each test<br />
foreach (test ${TestsToRun})<br />
get_filename_component (TName ${test} NAME_WE)<br />
add_test (NAME ${TName} COMMAND CommonCxxTests ${TName})<br />
endforeach ()<br />
</syntaxhighlight><br />
<br />
The create_test_sourcelist command is invoked to create a test driver. In this case it creates and writes CommonCxxTests.cxx into the binary tree of the project, using the rest of the arguments to determine its contents. Next, the add_executable() (page 273) command is used to add that executable to the build. Then a new variable called TestsToRun is created with an initial value of the sources required for the test driver. The remove() (page 349) command is used to remove the driver program itself from the list. Then, a foreach() (page 309) command is used to loop over the remaining sources. For each source, its name without a file extension is extracted and put in the variable TName, then a new test is added for TName. The end result is that for each source file in the create_test_sourcelist an add_test command is called with the name of the test. As more tests are added to the create_test_sourcelist command, the foreach loop will automatically call add_test for each one.<br />
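Once built, the driver executable runs an individual test by passing that test's name on the command line, which is exactly what the add_test calls in the loop above register. A hypothetical invocation:<br />
<br />
<syntaxhighlight lang="text"><br />
# run one test directly through the driver<br />
./CommonCxxTests otherArrays<br />
# or run the same test via CTest, using the name registered above<br />
ctest -R otherArrays<br />
</syntaxhighlight><br />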
<br />
<br />
===Managing Test Data===<br />
<br />
In addition to handling large numbers of tests, CMake contains a system for managing test data. It is encapsulated in the ExternalData CMake module, downloads large data on an as-needed basis, retains version information, and allows distributed storage.<br />
<br />
The design of ExternalData follows that of distributed version control systems, using hash-based file identifiers and object stores, but it also takes advantage of the presence of a dependency-based build system. The figure below illustrates the approach. Source trees contain lightweight "content links" referencing data in remote storage by hashes of their content. The ExternalData module produces build rules to download the data to local stores and reference them from build trees by symbolic links (copies on Windows).<br />
<br />
A content link is a small, plain text file containing a hash of the real data. Its name is the same as that of its data file, with an additional extension identifying the hash algorithm, e.g. img.png.md5. Content links always take the same (small) amount of space in the source tree regardless of the real data size. The CMakeLists.txt CMake configuration files refer to data using a DATA{} syntax inside calls to the ExternalData module API. For example, DATA{img.png} tells the ExternalData module to make img.png available in the build tree even if only an img.png.md5 content link appears in the source tree.<br />
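For illustration, a content link for a hypothetical img.png is a one-line file named img.png.md5 whose entire content is the MD5 hash of the real data (a placeholder hash is shown here, not a real checksum):<br />
<br />
<syntaxhighlight lang="text"><br />
0123456789abcdef0123456789abcdef<br />
</syntaxhighlight><br />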
<br />
<<Figure 10.1: ExternalData module flow chart>><br />
<br />
The ExternalData module implements a flexible system to prevent duplication of content fetching and storage. Objects are retrieved from a list of (possibly redundant) local and remote locations specified in the ExternalData CMake configuration as a list of "URL templates". The only requirement of remote storage systems is the ability to fetch from a URL that locates content through specification of the hash algorithm and hash value. Local or networked file systems, an Apache FTP server, or a Midas<ref>http://www.midasplatform.org</ref> server, for example, all have this capability. Each URL template has %(algo) and %(hash) placeholders for ExternalData to replace with values from a content link.<br />
<br />
A persistent local object store can cache downloaded content to share among build trees by setting the ExternalData_OBJECT_STORES CMake build configuration variable. This is helpful to de-duplicate content for multiple build trees. It also resolves an important pragmatic concern in a regression testing context; when many machines simultaneously start a nightly dashboard build, they can use their local object store instead of overloading the data servers and flooding network traffic.<br />
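Enabling such a store is a one-line configuration; the path below is an assumption for illustration:<br />
<br />
<syntaxhighlight lang="text"><br />
# share downloaded objects among all build trees on this machine<br />
set (ExternalData_OBJECT_STORES "/home/user/.ExternalData")<br />
</syntaxhighlight><br />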
<br />
Retrieval is integrated with a dependency-based build system, so resources are fetched only when needed. For example, if the system is used to retrieve testing data and BUILD_TESTING is OFF, the data are not retrieved unnecessarily. When the source tree is updated and a content link changes, the build system fetches the new data as needed.<br />
<br />
Since all references leaving the source tree go through hashes, they do not depend on any external state. Remote and local object stores can be relocated without invalidating content links in older versions of the source code. Content links within a source tree can be relocated or renamed without modifying the object stores. Duplicate content links can exist in a source tree, but download will only occur once. Multiple versions of data with the same source tree file name in a project's history are uniquely identified in the object stores.<br />
<br />
Hash-based systems allow the use of untrusted connections to remote resources because downloaded content is verified after it is retrieved. Configuration of the URL templates list improves robustness by allowing multiple redundant remote storage resources. Storage resources can also change over time on an as-needed basis. If a project's remote storage moves over time, a build of older source code versions is always possible by adjusting the URL templates configured for the build tree or by manually populating a local object store.<br />
<br />
A simple application of the ExternalData module looks like the following:<br />
<br />
<syntaxhighlight lang="text"><br />
include (ExternalData)<br />
set (midas "http://midas.kitware.com/MyProject")<br />
<br />
<br />
# Add standard remote object stores to user's<br />
# configuration.<br />
list (APPEND ExternalData_URL_TEMPLATES<br />
"${midas}?algorithm=%(algo)&hash=%(hash)"<br />
"ftp://myproject.org/files/%(algo)/%(hash)"<br />
)<br />
# Add a test referencing data.<br />
ExternalData_Add_Test (MyProjectData<br />
NAME SmoothingTest<br />
COMMAND SmoothingExe DATA{Input/Image.png}<br />
SmoothedImage.png<br />
)<br />
# Add a build target to populate the real data.<br />
ExternalData_Add_Target (MyProjectData)<br />
</syntaxhighlight><br />
<br />
The ExternalData_Add_Test function is a wrapper around CMake's add_test command. The source tree is probed for an Input/Image.png.md5 content link containing the data's MD5 hash. After checking the local object store, a request is made sequentially to each URL in the ExternalData_URL_TEMPLATES list with the data's hash. Once the data are found, a symlink is created in the build tree. The DATA{Input/Image.png} path will expand to the build tree path in the test command line. Data are retrieved when the MyProjectData target is built.<br />
<br />
<br />
===Producing Test Dashboards===<br />
<br />
As your project's testing needs grow, keeping track of the test results can become overwhelming. This is especially true for projects that are tested nightly on a number of different platforms. In these cases, we recommend using a test dashboard to summarize the test results. (see Figure 10.2)<br />
<br />
A test dashboard summarizes the results for many tests on many platforms, and its hyperlinks allow people to drill down into additional levels of detail quickly. The CTest executable includes support for producing test dashboards. When run with the correct options, CTest will produce XML-based output recording the build and test results, and post them to a dashboard server. The dashboard server runs an open source software package called CDash. CDash collects the XML results and produces HTML web pages from them.<br />
<br />
Before discussing how to use CTest to produce a dashboard, let us consider the main parts of a testing dashboard. Each night at a specified time, the dashboard server will open up a new dashboard so each day there is a new web page showing the results of tests for that twenty-four hour period. There are links on the main page that allow you to quickly navigate through different days. Looking at the main page for a project (such as CMake's dashboard off of www.cmake.org), you will see that it is divided into a few main components. Near the top you will find a set of links that allow you to step to previous dashboards, as well as links to project pages such as the bug tracker, documentation, etc.<br />
<br />
<<Figure 10.2: Sample Testing Dashboard>><br />
<br />
Below that, you will find groups of results. Typical groups include Nightly, Experimental, Continuous, Coverage, and Dynamic Analysis (see Figure 10.3). The category into which a dashboard entry is placed depends on how it was generated. The simplest are Experimental entries, which represent dashboard results for someone's current copy of the project's source code. With an experimental dashboard, the source code is not guaranteed to be up to date. In contrast, a Nightly dashboard entry is one where CTest tries to update the source code to a specific date and time. The expectation is that all nightly dashboard entries for a given day should be based on the same source code.<br />
<br />
<<Figure 10.3: Experimental, Coverage, and Dynamic Analysis Results>><br />
<br />
A continuous dashboard entry is one that is designed to run every time new files are checked in. Depending on how frequently new files are checked in, a single day's dashboard could have many continuous entries. Continuous dashboards are particularly helpful for cross-platform projects where a problem may only show up on some platforms. In those cases, a developer can commit a change that works on their platform, and another platform running a continuous build can catch the error, allowing the developer to correct the problem promptly.<br />
<br />
Dynamic Analysis and Coverage dashboards are designed to test the memory safety and code coverage of a project. A Dynamic Analysis dashboard entry is one where all the tests are run with a memory access/leak checking program enabled. Any resulting errors or warnings are parsed, summarized, and displayed. This is important to verify that your software is not leaking memory or reading from uninitialized memory. Coverage dashboard entries are similar in that all the tests are run, but as they run, the lines of code being executed are tracked. When all the tests have been run, a listing of how many times each line of code was executed is produced and displayed on the dashboard.<br />
<br />
<br />
====Adding CDash Dashboard Support to a Project====<br />
<br />
In this section we show how to submit results to the CDash dashboard. You can either use the Kitware CDash server at my.cdash.org or set up your own CDash server as described in section 10.11. If you are using my.cdash.org, you can click on the "Start My Project" button, which will ask you to create an account (or log in if you already have one) and then bring you to a page to start creating your project. If you have installed your own CDash server, log in to your CDash server as administrator and select "Create New Project" from the administration panel. Regardless of which approach you use, the next few steps will be to fill in information about your project as shown in Figure 10.4. Many of the items below are optional, so do not be concerned if you do not have a value for them; just leave them empty if they don't apply.<br />
<br />
<<Figure 10.4: Creating a new project in CDash>><br />
<br />
'''Name:''' what you want to call the project.<br />
<br />
'''Description:''' description of the project to be shown on the first page.<br />
<br />
'''Home URL:''' home URL of the project to appear in the main menu of the dashboard.<br />
<br />
'''Bug Tracker URL:''' URL to the bug tracker. Currently CDash supports Mantis<ref>http://www.mantisbt.org/</ref>; if a bug is entered in the repository with the message "BUG: 132456", CDash will automatically link to the appropriate bug.<br />
<br />
'''Documentation URL:''' URL to where the project's documentation is kept. This will appear in the main menu of the dashboard.<br />
<br />
'''Public Dashboard:''' if checked, the dashboard is public and anybody can see the results of the dashboard. If unchecked, only users assigned to this project can access the dashboard.<br />
<br />
'''Logo:''' logo of the project to be displayed on the main dashboard. Optimal size for a logo is 100x100 pixels. Transparent GIFs work best as they can blend in with the CDash background.<br />
<br />
'''Repository Viewer URL:''' URL of the web repository browser. CDash currently supports ViewCVS, Trac, Fisheye, ViewVC, WebSVN, Loggerhead, GitHub, gitweb, hgweb, and others. Some example URLs are http://public.kitware.com/cgi-bin/viewcvs.cgi/?cvsroot=CMake (for ViewVC) and https://www.kitware.com/websvn/listing.php?repname=MyRepository (for WebSVN).<br />
<br />
'''Repositories:''' in order to display the daily updates, CDash gets a diff version of the modified files. Currently CDash supports only anonymous repository access. A typical URL is :pserver:anoncvs@myproject.org:/cvsroot/MyProject.<br />
<br />
'''Nightly Start Time:''' CDash displays the current dashboard using a 24 hour window. The nightly start time defines the beginning of this window. Note that the start time is expressed in the form HH:MM:SS TZ, e.g. 01:00:00 UTC. It is recommended to express the nightly start time in UTC to keep operations running smoothly across the boundaries of local time changes, such as moving to or from daylight saving time.<br />
<br />
'''Coverage Threshold:''' CDash marks coverage as passed (green) if the global coverage for a build or specific files is above this threshold. It is recommended to set the coverage threshold to a high value and decrease it as you focus on improving your coverage.<br />
<br />
'''Enable Test Timing:''' enable/disable test timing for this project. See "Test timing" in the next section for more information.<br />
<br />
'''Test Time Standard Deviation:''' set a multiplier for the standard deviation of a test time. If the time for a test is higher than the mean + multiplier * standard deviation, the test time status is marked as failed. The default value is 4 if not specified. Note that changing this value does not affect previous builds; only builds submitted after the modification.<br />
<br />
'''Test Time Standard Deviation Threshold:''' set a minimum standard deviation for a test time. If the current standard deviation for a test is lower than this threshold, then the threshold is used instead. This is particularly important for tests that have a very low standard deviation, but still some variability. The default threshold is set to 2 if not specified. Note that changing this value does not affect previous builds, only builds submitted after the modification.<br />
<br />
'''Test Time # Max Failures Before Flag:''' some tests might take longer from one day to another depending on the client machine load. This variable defines the number of times a test should fail because of timing issues before being flagged.<br />
<br />
'''Email Submission Failures:''' enable/disable sending email when a build fails (configure, error, warning, update, and test failures) for this project. This is a general feature.<br />
<br />
<br />
'''Email Redundant Failure:''' by default, CDash does not send email for the same failures. For instance, if a build continues to fail over time, only one email is sent. If this option is checked, CDash will send an email every time a build has a failure.<br />
<br />
'''Email Build Missing:''' enable/disable sending email when a build has not been submitted.<br />
<br />
'''Email Low Coverage:''' enable/disable sending email when the coverage for files is lower than the threshold value specified above.<br />
<br />
'''Email Test Timing Changed:''' enable/disable sending email when a test's timing has changed.<br />
<br />
'''Maximum Number of Items in Email:''' dictates how many failures should be sent in an email.<br />
<br />
'''Maximum Number of Characters in Email:''' dictates how many characters from the log should be sent in the email.<br />
<br />
'''Google Analytics Tracker:''' CDash supports visitor tracking through Google analytics. See "Adding Google Analytics" for more information.<br />
<br />
'''Show Site IP Addresses:''' enable/disable the display of IP addresses of the sites submitting to this project.<br />
<br />
'''Display Labels:''' as of CDash 1.4 and CTest 2.8, labels can be attached to various build and test results. If checked, these labels are displayed on applicable CDash pages.<br />
<br />
'''AutoRemove Timeframe:''' set the number of days to retain results for this project. If the timeframe is less than 2 days, CDash will not remove any builds.<br />
<br />
'''AutoRemove Max Builds:''' set the maximum number of builds to remove when performing the auto removal of builds.<br />
<br />
<br />
After providing this information, you can click on "Create Project" to create the project in CDash. At this point the server is ready to accept dashboard submissions. The next step is to provide the dashboard server information to your software project. This information is kept in a file named CTestConfig.cmake at the top level of your source tree. You can download this file by clicking on the "Edit Project" button for your dashboard (it looks like a pie chart with a wrench underneath it), clicking on the miscellaneous tab, selecting "Download CTestConfig", and saving the CTestConfig.cmake file in your source tree. In the next section, we review this file in more detail.<br />
<br />
<br />
====Client Setup====<br />
<br />
To support dashboards in your project you need to include the CTest module as follows.<br />
<br />
<syntaxhighlight lang="text"><br />
# Include CDash dashboard testing module<br />
include (CTest)<br />
</syntaxhighlight><br />
<br />
The CTest module will then read settings from the CTestConfig.cmake file you downloaded from CDash. If you have added add_test() (page 277) command calls to your project, creating a dashboard entry is as simple as running:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D Experimental<br />
</syntaxhighlight><br />
<br />
The -D option tells CTest to create a dashboard entry. The next argument indicates what type of dashboard entry to create. Creating a dashboard entry involves quite a few steps that can be run independently, or as one command. In this example, the Experimental argument will cause CTest to perform a number of different steps as one command. The different steps of creating a dashboard entry are summarized below.<br />
<br />
'''Start''' Prepare a new dashboard entry. This creates a Testing subdirectory in the build directory. The Testing subdirectory will contain a subdirectory for the dashboard results with a name that corresponds to the dashboard time. The Testing subdirectory will also contain a subdirectory for the temporary testing results called Temporary.<br />
<br />
'''Update''' Perform a source control update of the source code (typically used for nightly or continuous runs). Currently CTest supports Concurrent Versions System (CVS), Subversion, Git, Mercurial, and Bazaar.<br />
<br />
'''Configure''' Run CMake on the project to make sure the Makefiles or project files are up to date.<br />
<br />
'''Build''' Build the software using the specified generator.<br />
<br />
'''Test''' Run all the tests and record the results.<br />
<br />
'''MemoryCheck''' Perform memory checks using Purify or valgrind.<br />
<br />
'''Coverage''' Collect source code coverage information using gcov or Bullseye.<br />
<br />
'''Submit''' Submit the testing results as a dashboard entry to the server.<br />
<br />
Each of these steps can be run independently for a Nightly or Experimental entry using the following syntax:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D NightlyStart<br />
ctest -D NightlyBuild<br />
ctest -D NightlyCoverage -D NightlySubmit<br />
</syntaxhighlight><br />
<br />
or<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D ExperimentalStart<br />
ctest -D ExperimentalConfigure<br />
ctest -D ExperimentalCoverage -D ExperimentalSubmit<br />
</syntaxhighlight><br />
<br />
Alternatively, you can use shortcuts that perform the most common combinations all at once. The shortcuts that CTest has defined include:<br />
<br />
'''ctest -D Experimental''' performs the start, configure, build, test, coverage, and submit commands.<br />
<br />
'''ctest -D Nightly''' performs the start, update, configure, build, test, coverage, and submit commands.<br />
<br />
'''ctest -D Continuous''' performs the start, update, configure, build, test, coverage, and submit commands.<br />
<br />
'''ctest -D MemoryCheck''' performs the start, configure, build, memorycheck, coverage, and submit commands.<br />
<br />
When first setting up a dashboard it is often useful to combine the -D option with the -V option. This will allow you to see the output of all the different stages of the dashboard process. Likewise, CTest maintains log files in the Testing/Temporary directory it creates in your binary tree. There you will find log files for the most recent dashboard run. The dashboard results (XML files) are stored in the Testing directory as well.<br />
<br />
<br />
===Customizing Dashboards for a Project===<br />
<br />
CTest has a few options that can be used to control how it processes a project. If, when CTest runs a dashboard, it finds CTestCustom.ctest files in the binary tree, it will load these files and use the settings in them to control its behavior. The syntax of a CTestCustom file is the same as regular CMake syntax. That said, only set commands are normally used in this file. These commands specify properties that CTest will consider when performing the testing.<br />
<br />
<br />
====Dashboard Submissions Settings====<br />
<br />
A number of the basic dashboard settings are provided in the file that you download from CDash. You can edit these initial values and provide additional values if you wish. The first value that is set is the nightly start time. This is the time that dashboards all around the world will use for checking out their copy of the nightly source code. This time also controls how dashboard submissions will be grouped together. All submissions from the nightly start time until the next nightly start time will be included on the same "day".<br />
<br />
<syntaxhighlight lang="text"><br />
# Dashboard is opened for submissions for a 24 hour period<br />
# starting at the specified NIGHTLY_START_TIME. Time is<br />
# specified in 24 hour format.<br />
set (CTEST_NIGHTLY_START_TIME "01:00:00 UTC")<br />
</syntaxhighlight><br />
<br />
The next group of settings control where to submit the testing results. This is the location of the CDash server.<br />
<br />
<syntaxhighlight lang="text"><br />
# CDash server to submit results (used by client)<br />
set (CTEST_DROP_METHOD http)<br />
set (CTEST_DROP_SITE "my.cdash.org")<br />
set (CTEST_DROP_LOCATION "/submit.php?project=KensTest")<br />
set (CTEST_DROP_SITE_CDASH TRUE)<br />
</syntaxhighlight><br />
<br />
The CTEST_DROP_SITE (page 678) specifies the location of the CDash server. Build and test results generated by CDash clients are sent to this location. The CTEST_DROP_LOCATION (page 678) is the directory or the HTTP URL on the server where CDash clients leave their build and test reports. The CTEST_DROP_SITE_CDASH (page 678) specifies that the current server is CDash, which prevents CTest from trying to "trigger" the submission (triggering is still done when this variable is not set, for backwards compatibility with Dart and Dart 2).<br />
<br />
Currently CDash supports only the HTTP drop submission method; however, CTest supports other submission types. The CTEST_DROP_METHOD (page 678) specifies the method used to submit testing results. The most common setting for this is HTTP, which uses the Hyper Text Transfer Protocol (HTTP) to transfer the test data to the server. Other drop methods, such as FTP and SCP, are supported for special cases. In the example above, clients submitting their results using the HTTP protocol use a web address as their drop site. If the submission is via FTP, this location is relative to where the CTEST_DROP_SITE_USER (page 678) will log in by default. The CTEST_DROP_SITE_USER specifies the FTP username the client will use on the server. For FTP submissions this user will typically be "anonymous"; however, any username that can communicate with the server can be used. For FTP servers that require a password, it can be stored in the CTEST_DROP_SITE_PASSWORD (page 678) variable. The CTEST_DROP_SITE_MODE (not used in this example) is an optional variable that you can use to specify the FTP mode. Most FTP servers will handle the default passive mode, but you can set the mode explicitly to active if your server does not.<br />
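As a sketch, an FTP-based drop configuration might look like the following (the host, directory, and password values are illustrative assumptions):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_DROP_METHOD ftp)<br />
set (CTEST_DROP_SITE "ftp.myproject.org")<br />
set (CTEST_DROP_LOCATION "/incoming")<br />
set (CTEST_DROP_SITE_USER "anonymous")<br />
set (CTEST_DROP_SITE_PASSWORD "user@myproject.org")<br />
</syntaxhighlight><br />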
<br />
CTest can also be run from behind a firewall. If the firewall allows FTP or HTTP traffic, then no additional settings are required. If the firewall requires an FTP/HTTP proxy or uses a SOCKS4 or SOCKS5 type proxy, some environment variables need to be set. HTTP_PROXY and FTP_PROXY specify the servers that service HTTP and FTP proxy requests. HTTP_PROXY_PORT and FTP_PROXY_PORT specify the port on which the HTTP and FTP proxies reside. HTTP_PROXY_TYPE specifies the type of the HTTP proxy used. The three proxy types supported are the default, which is a generic HTTP/FTP proxy, "SOCKS4", and "SOCKS5", which specify SOCKS4- and SOCKS5-compatible proxies.<br />
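For example, on a Unix-like system the proxy environment variables could be set in the shell before invoking CTest (the host and port are illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
# generic HTTP proxy on a hypothetical host and port<br />
export HTTP_PROXY=proxy.example.com<br />
export HTTP_PROXY_PORT=3128<br />
ctest -D Experimental<br />
</syntaxhighlight><br />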
<br />
<br />
====Filtering Errors and Warnings====<br />
<br />
By default, CTest has a list of regular expressions that it matches for finding the errors and warnings from the output of the build process. You can override these settings in your CTestCustom.ctest files using several variables as shown below.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CUSTOM_WARNING_MATCH<br />
${CTEST_CUSTOM_WARNING_MATCH}<br />
"{standard input}:[0-9][0-9]*: Warning: "<br />
)<br />
<br />
set (CTEST_CUSTOM_WARNING_EXCEPTION<br />
${CTEST_CUSTOM_WARNING_EXCEPTION}<br />
"tk8.4.5/[^/]+/[^/]+.c[:\"]"<br />
"xtree.[0-9]+. : warning C4702: unreachable code"<br />
"warning LNK4221"<br />
"variable .var_args[2]*. is used before its value is set"<br />
"jobserver unavailable"<br />
)<br />
</syntaxhighlight><br />
<br />
Another useful feature of the CTestCustom files is that you can use them to limit the tests that are run for memory checking dashboards. Memory checking using Purify or Valgrind is a CPU-intensive process that can take twenty hours for a dashboard that normally takes one hour. To help alleviate this problem, CTest allows you to exclude some of the tests from the memory checking process as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CUSTOM_MEMCHECK_IGNORE<br />
${CTEST_CUSTOM_MEMCHECK_IGNORE}<br />
TestSetGet<br />
otherPrint-ParaView<br />
Example-vtkLocal<br />
Example-vtkMy<br />
)<br />
</syntaxhighlight><br />
<br />
The format for excluding tests is simply a list of test names as specified when the tests were added in your CMakeLists file with add_test() (page 277).<br />
<br />
In addition to the demonstrated settings, such as CTEST_CUSTOM_WARNING_MATCH, CTEST_CUSTOM_WARNING_EXCEPTION, and CTEST_CUSTOM_MEMCHECK_IGNORE, CTest also checks several other variables.<br />
<br />
'''CTEST_CUSTOM_ERROR_MATCH''' Additional regular expressions to consider a build line as an error line<br />
<br />
'''CTEST_CUSTOM_ERROR_EXCEPTION''' Additional regular expressions to consider a build line not as an error line<br />
<br />
'''CTEST_CUSTOM_WARNING_MATCH''' Additional regular expressions to consider a build line as a warning line<br />
<br />
'''CTEST_CUSTOM_WARNING_EXCEPTION''' Additional regular expressions to consider a build line not as a warning line<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_NUMBER_OF_ERRORS''' Maximum number of errors before CTest stops reporting errors (default 50)<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_NUMBER_OF_WARNINGS''' Maximum number of warnings before CTest stops reporting warnings (default 50)<br />
<br />
'''CTEST_CUSTOM_COVERAGE_EXCLUDE''' Regular expressions for files to be excluded from the coverage analysis<br />
<br />
'''CTEST_CUSTOM_PRE_MEMCHECK''' List of commands to execute before performing memory checking<br />
<br />
'''CTEST_CUSTOM_POST_MEMCHECK''' List of commands to execute after performing memory checking<br />
<br />
'''CTEST_CUSTOM_MEMCHECK_IGNORE''' List of tests to exclude from the memory checking step<br />
<br />
'''CTEST_CUSTOM_PRE_TEST''' List of commands to execute before performing testing<br />
<br />
'''CTEST_CUSTOM_POST_TEST''' List of commands to execute after performing testing<br />
<br />
'''CTEST_CUSTOM_TESTS_IGNORE''' List of tests to exclude from the testing step<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_PASSED_TEST_OUTPUT_SIZE''' Maximum size of test output for the passed test (default 1k)<br />
<br />
'''CTEST_CUSTOM_MAXIMUM_FAILED_TEST_OUTPUT_SIZE''' Maximum size of test output for the failed test (default 300k)<br />
<br />
Commands specified in CTEST_CUSTOM_PRE_TEST and CTEST_CUSTOM_POST_TEST, as well as the<br />
equivalent memory checking ones, are executed once per CTest run. These commands can be used, for<br />
example, if all tests require some initial setup and some final cleanup to be performed.<br />
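<br />
As a sketch, a CTestCustom file could use these hooks to perform that setup and cleanup once per run; the script names here are hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
# hypothetical commands executed once before and once after the test step<br />
set (CTEST_CUSTOM_PRE_TEST "cmake -P StartTestServer.cmake")<br />
set (CTEST_CUSTOM_POST_TEST "cmake -P StopTestServer.cmake")<br />
</syntaxhighlight><br />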
<br />
<br />
====Adding Notes to a Dashboard====<br />
<br />
CTest and CDash support adding note files to a dashboard submission. These will appear on the dashboard as a clickable icon that links to the text of all the files. To add notes, call CTest with the -A option followed by a semicolon-separated list of filenames. The contents of these files will be submitted as notes for the dashboard. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D Continuous -A C:/MyNotes.txt;C:/OtherNotes.txt<br />
</syntaxhighlight><br />
<br />
Another way to submit notes with a dashboard is to copy or write the notes as files into a Notes directory under the Testing directory of your binary tree. Any files found there when CTest submits a dashboard will also be uploaded as notes.<br />
<br />
<br />
===Setting up Automated Dashboard Clients===<br />
<br />
'''IMPORTANT:''' This section is obsolete and left in only for reference. To set up new dashboards, please skip ahead to the next section and write an "advanced CTest script" instead of following the directions in this section.<br />
<br />
CTest has a built-in scripting mode to help make the process of setting up dashboard clients even easier. CTest scripts will handle most of the common tasks and options that CTest -D Nightly does not. The dashboard script is written using CMake syntax and mainly involves setting up different variables or options, or creating an elaborate procedure, depending on the complexity of testing. Once you have written the script you can run the nightly dashboard as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -S myScript.cmake<br />
</syntaxhighlight><br />
<br />
First we will consider the most basic script you can use, and then we will cover the different options you can make use of. There are four variables that you must always set in your scripts. The first two variables are the names of the source and binary directories on disk, CTEST_SOURCE_DIRECTORY (page 680) and CTEST_BINARY_DIRECTORY (page 675). These should be fully specified paths. The next variable, CTEST_COMMAND, specifies which CTest command to use for running the dashboard. This may seem a bit confusing at first. The -S option of CTest is provided to do all the setup and customization for a dashboard, but the actual running of the dashboard is done with another invocation of CTest -D. Basically, once the CTest script has done what it needs to do to set up the dashboard, it invokes CTest -D to actually generate the results. You can adjust the value of CTEST_COMMAND to control what type of dashboard to generate (Nightly, Experimental, Continuous), as well as to pass other options to the internal CTest process, such as -I,,7 to run every 7th test. To refer to the CTest that is running the script, use the variable CTEST_EXECUTABLE_NAME. The last required variable is CTEST_CMAKE_COMMAND, which specifies the full path to the cmake executable that will be used to configure the dashboard. To refer to the CMake command that corresponds to the CTest command running the script, use the variable CMAKE_EXECUTABLE_NAME. The CTest script does an initial configuration with cmake in order to generate the CTestConfig.cmake file that CTest will use for the dashboard. The following example demonstrates the use of these four variables and is an example of the simplest script you can have.<br />
<br />
<syntaxhighlight lang="text"><br />
# these are the source and binary directories on disk<br />
set (CTEST_SOURCE_DIRECTORY C:/martink/test/CMake)<br />
set (CTEST_BINARY_DIRECTORY C:/martink/test/CMakeBin)<br />
<br />
# which CTest command to use for running the dashboard<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\" -D Nightly"<br />
)<br />
<br />
# what CMake command to use for configuring this dashboard<br />
set (CTEST_CMAKE_COMMAND<br />
"\"${CMAKE_EXECUTABLE_NAME}\""<br />
)<br />
</syntaxhighlight><br />
<br />
The script above is not that different from running CTest -D from the command line yourself. All it adds is that it verifies that the binary directory exists and creates it if it does not. Where CTest scripting really shines is in the optional features it supports. We will consider these options one by one, starting with one of the most commonly used options, CTEST_START_WITH_EMPTY_BINARY_DIRECTORY. When this variable is set to true, it will delete the binary directory and then recreate it as an empty directory prior to running the dashboard. This guarantees that you are testing a clean build every time the dashboard is run. To use this option you simply set it in your script. In the example above we would simply add the following lines:<br />
<br />
<syntaxhighlight lang="text"><br />
# should CTest wipe the binary tree before running<br />
set (CTEST_START_WITH_EMPTY_BINARY_DIRECTORY TRUE)<br />
</syntaxhighlight><br />
<br />
Another commonly used option is the CTEST_INITIAL_CACHE variable. Whatever values you set this to will be written into the CMakeCache file prior to running the dashboard. This is an effective and simple way to initialize a cache with some preset values. The syntax is the same as what is in the cache with the exception that you must escape any quotes. Consider the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
# this is the initial cache to use for the binary tree, be<br />
# careful to escape any quotes inside of this string<br />
set (CTEST_INITIAL_CACHE "<br />
<br />
//Command used to build entire project from the command line.<br />
MAKECOMMAND:STRING=\"devenv.com\" CMake.sln /build Debug /project ALL_BUILD<br />
<br />
//make program<br />
CMAKE_MAKE_PROGRAM:FILEPATH=C:/PROGRA~1/MICROS~1.NET/Common7/IDE/devenv.com<br />
<br />
//Name of generator.<br />
CMAKE_GENERATOR:INTERNAL=Visual Studio 7 .NET 2003<br />
<br />
//Path to a program.<br />
CVSCOMMAND:FILEPATH=C:/cygwin/bin/cvs.exe<br />
<br />
//Name of the build<br />
BUILDNAME:STRING=Win32-vs71<br />
<br />
//Name of the computer/site where compile is being run<br />
SITE:STRING=DASH1.kitware<br />
<br />
")<br />
</syntaxhighlight><br />
<br />
Note that the above code is basically just one set() (page 330) command setting the value of CTEST_INITIAL_CACHE to a multiline string value. For Windows builds, these are the most common cache entries that need to be set prior to running the dashboard. The first three values control what compiler will be used to build this dashboard (Visual Studio 7.1 in this example). CVSCOMMAND might be found automatically, but if not it can be set here. The last two cache entries are the names that will be used to identify this dashboard submission on the dashboard.<br />
<br />
The next two variables work together to support additional directories and projects. For example, imagine that you had a separate data directory that you needed to keep up-to-date with your source directory. Setting the variables CTEST_CVS_COMMAND (page 677) and CTEST_EXTRA_UPDATES_1 tells CTest to perform a cvs update on the specified directory, with the specified arguments prior to running the dashboard. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
# what cvs command to use for configuring this dashboard<br />
set (CTEST_CVS_COMMAND "C:/cygwin/bin/cvs.exe")<br />
<br />
# set any extra directories to do an update on<br />
set (CTEST_EXTRA_UPDATES_1<br />
"C:/Dashboards/My Tests/VTKData" "-dAP")<br />
</syntaxhighlight><br />
<br />
If you have more than one directory that needs to be updated you can use CTEST_EXTRA_UPDATES_2 through CTEST_EXTRA_UPDATES_9 in the same manner. The next variable you can set is called CTEST_ENVIRONMENT. This variable consolidates several set commands into a single command. Setting this variable allows you to set environment variables that will be used by the process running the dashboards. You can set as many environment variables as you want using the syntax shown below.<br />
<br />
<syntaxhighlight lang="text"><br />
# set any extra environment variables here<br />
set (CTEST_ENVIRONMENT<br />
"DISPLAY=:0"<br />
"USE_GCC_MALLOC=1"<br />
)<br />
# is the same as<br />
set (ENV{DISPLAY} ":0")<br />
set (ENV{USE_GCC_MALLOC} "1")<br />
</syntaxhighlight><br />
<br />
The final general purpose option we will discuss is CTest's support for restoring a bad dashboard. In some cases, you might want to make sure that you always have a working build of the software. In other instances, you might use the resulting executables or libraries from one dashboard in the build process of another dashboard. If the first dashboard fails in either of these situations, it is best to drop back to the last previously working dashboard. You can do this in CTest by setting CTEST_BACKUP_AND_RESTORE to true. When this is set to true, CTest will first back up the source and binary directories. It will then check out a new source directory and create a new binary directory. After that, it will run a full dashboard. If the dashboard is successful the backup directories are removed; if for some reason the new dashboard fails, the new directories will be removed and the old directories restored. To make this work, you must also set the CTEST_CVS_CHECKOUT (page 677) variable. This should be set to the command required to check out your source tree. This doesn't actually have to be cvs, but it must result in a source tree in the correct location. Consider the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
# do a backup and should the build fail restore,<br />
# if this is true you must set the CTEST_CVS_CHECKOUT<br />
# variable below.<br />
set (CTEST_BACKUP_AND_RESTORE TRUE)<br />
<br />
# this is the full cvs command to checkout the source dir<br />
# this will be run from the directory above the source dir<br />
set (CTEST_CVS_CHECKOUT<br />
"/usr/bin/cvs -d /cvsroot/FOO co -d FOO FOO"<br />
)<br />
</syntaxhighlight><br />
<br />
Note that whatever checkout command you specify will be run from the directory above the source directory. A typical nightly dashboard client script will look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_SOURCE_NAME CMake)<br />
set (CTEST_BINARY_NAME CMake-gcc)<br />
set (CTEST_DASHBOARD_ROOT "$ENV{HOME}/Dashboards/My Tests")<br />
<br />
set (CTEST_SOURCE_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_SOURCE_NAME}")<br />
set (CTEST_BINARY_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_BINARY_NAME}")<br />
<br />
# which ctest command to use for running the dashboard<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\"<br />
-D Nightly<br />
-A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\"")<br />
<br />
# what CMake command to use for configuring this dashboard<br />
set (CTEST_CMAKE_COMMAND "\"${CMAKE_EXECUTABLE_NAME}\"")<br />
<br />
# should ctest wipe the binary tree before running<br />
set (CTEST_START_WITH_EMPTY_BINARY_DIRECTORY TRUE)<br />
# this is the initial cache to use for the binary tree<br />
set (CTEST_INITIAL_CACHE "<br />
SITE:STRING=midworld.kitware<br />
BUILDNAME:STRING=DarwinG5-g++<br />
MAKECOMMAND:STRING=make -i -j2<br />
")<br />
<br />
# set any extra environment variables here<br />
set (CTEST_ENVIRONMENT<br />
"CC=gcc"<br />
"CXX=g++"<br />
)<br />
</syntaxhighlight><br />
<br />
<br />
====Settings for Continuous Dashboards====<br />
<br />
The next three variables are used for setting up continuous dashboards. As mentioned earlier, a continuous<br />
dashboard is designed to run continuously throughout the day, providing quick feedback on the state of the<br />
software. If you are doing a continuous dashboard you can use CTEST_CONTINUOUS_DURATION and<br />
CTEST_CONTINUOUS_MINIMUM_INTERVAL to run the continuous repeatedly. The duration controls<br />
how long the script should run continuous dashboards, and the minimum interval specifies the shortest<br />
allowed time between continuous dashboards. For example, say that you want to run a continuous dashboard<br />
from 9AM until 7PM and that you want no more than one dashboard every twenty minutes. To do this you<br />
would set the duration to 600 minutes (ten hours) and the minimum interval to 20 minutes. If you run the<br />
test script at 9AM it will start a continuous dashboard. When that dashboard finishes it will check to see<br />
how much time has elapsed. If less than 20 minutes have elapsed, CTest will sleep until the 20 minutes are up.<br />
If 20 or more minutes have elapsed, it will immediately start another continuous dashboard. Do not be<br />
concerned that you will end up with 30 dashboards a day (ten hours, three per hour). If there have been<br />
no changes to the source code, CTest will not build and submit a dashboard. It will instead wait until<br />
the next interval is up and then check again. Using this feature just involves setting the following variables to<br />
the values you desire.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CONTINUOUS_DURATION 600)<br />
set (CTEST_CONTINUOUS_MINIMUM_INTERVAL 20)<br />
</syntaxhighlight><br />
<br />
Earlier, we introduced the CTEST_START_WITH_EMPTY_BINARY_DIRECTORY variable that can be set to start the dashboards with an empty binary directory. If this is set to true for a continuous dashboard, then every continuous build in which there has been a change in the source code will result in a complete build from scratch. For larger projects this can significantly limit the number of continuous dashboards that can be generated in a day, while not using it can result in build errors or omissions because the build is not clean. Fortunately there is a compromise: if you set CTEST_START_WITH_EMPTY_BINARY_DIRECTORY_ONCE to true, CTest will start with a clean binary directory for the first continuous build but not subsequent ones. Based on your settings for the duration, this is an easy way to start with a clean build every morning, but use existing builds for the rest of the day.<br />
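<br />
Enabling this compromise is a one-line addition to the continuous script:<br />
<br />
<syntaxhighlight lang="text"><br />
# wipe the binary tree only for the first continuous build of the day<br />
set (CTEST_START_WITH_EMPTY_BINARY_DIRECTORY_ONCE TRUE)<br />
</syntaxhighlight><br />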
<br />
Another helpful feature to use with a continuous dashboard is the -I option. A large project may have so many tests that running all the tests limits how frequently a continuous dashboard can be generated. By adding -I,,7 (or -I,,5 etc) to the CTEST_COMMAND value, the continuous dashboard will only run every seventh test, significantly reducing the time required between continuous dashboards. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
# these are the names of the source and binary directories<br />
set (CTEST_SOURCE_NAME CMake-cont)<br />
set (CTEST_BINARY_NAME CMakeBCC-cont)<br />
set (CTEST_DASHBOARD_ROOT "c:/Dashboards/My Tests")<br />
set (CTEST_SOURCE_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_SOURCE_NAME}")<br />
set (CTEST_BINARY_DIRECTORY<br />
"${CTEST_DASHBOARD_ROOT}/${CTEST_BINARY_NAME}")<br />
<br />
# which ctest command to use for running the dashboard<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\"<br />
-D Continuous<br />
-A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\"")<br />
<br />
# what CMake command to use for configuring this dashboard<br />
set (CTEST_CMAKE_COMMAND "\"${CMAKE_EXECUTABLE_NAME}\"")<br />
<br />
# this is the initial cache to use for the binary tree<br />
set (CTEST_INITIAL_CACHE "<br />
SITE:STRING=dash14.kitware<br />
BUILDNAME:STRING=Win32-bcc5.6<br />
CMAKE_GENERATOR:INTERNAL=Borland Makefiles<br />
CVSCOMMAND:FILEPATH=C:/Program Files/TortoiseCVS/cvs.exe<br />
CMAKE_CXX_FLAGS:STRING=-w- -whid -waus -wpar -tWM<br />
CMAKE_C_FLAGS:STRING=-w- -whid -waus -tWM<br />
")<br />
<br />
# set any extra environment variables here<br />
set (ENV{PATH} "C:/Program Files/Borland/CBuilder6/Bin\;<br />
C:/Program Files/Borland/CBuilder6/Projects/Bpl"<br />
)<br />
</syntaxhighlight><br />
<br />
<br />
====Variables Available in CTest Scripts====<br />
<br />
There are a few variables that will be set before your script executes. The first two variables are the directory the script is in, CTEST_SCRIPT_DIRECTORY, and the name of the script itself, CTEST_SCRIPT_NAME. These two variables can be used to make your scripts more portable. For example, if you wanted to include the script itself as a note for the dashboard you could do the following:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\" -D Continuous<br />
-A \"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}\""<br />
)<br />
</syntaxhighlight><br />
<br />
Another variable you can use is CTEST_SCRIPT_ARG. This variable can be set by providing a comma-separated argument after the script name when invoking ctest -S. For example, ctest -S foo.cmake,21 would result in CTEST_SCRIPT_ARG being set to 21.<br />
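<br />
A script can then branch on that value. The following sketch (DASHBOARD_MODEL is a hypothetical helper variable, not a CTest name) uses the argument, if given, to select the dashboard model:<br />
<br />
<syntaxhighlight lang="text"><br />
# invoked as, e.g.: ctest -S myScript.cmake,Continuous<br />
if (CTEST_SCRIPT_ARG)<br />
set (DASHBOARD_MODEL ${CTEST_SCRIPT_ARG})<br />
else ()<br />
set (DASHBOARD_MODEL Nightly)<br />
endif ()<br />
set (CTEST_COMMAND<br />
"\"${CTEST_EXECUTABLE_NAME}\" -D ${DASHBOARD_MODEL}")<br />
</syntaxhighlight><br />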
<br />
<br />
====Limitations of Traditional CTest Scripting====<br />
<br />
The traditional CTest scripting described in this section has some limitations. The first is that the dashboard will always fail if the Configure step fails. The reason is that the input files for CTest are actually generated by the Configure step. To make things worse, the update step will not happen and the dashboard will be stuck. To prevent this, an additional update step is necessary. This can be achieved by adding a CTEST_EXTRA_UPDATES_1 variable with a "-D yesterday" or similar flag. This will update the repository prior to doing a dashboard. Since it will update to yesterday's time stamp, the actual update step of CTest will find the files that were modified since the previous day.<br />
<br />
The second limitation of traditional CTest scripting is that it is not actually scripting. We only have control over what happens before the actual CTest run, but not what happens during or after. For example, if we want to run the testing and then move the binaries somewhere, or if we want to build the project, do some extra tasks and then run tests or something similar, we need to perform several complicated tasks, such as run CMake with -P option as a part of CTEST_COMMAND.<br />
<br />
<br />
===Advanced CTest Scripting===<br />
<br />
The CTest scripting described in the previous section is still valid and will still work. This section describes how to write command-based CTest scripts that allow the maintainer to have much more fine-grained control over the individual steps of a dashboard.<br />
<br />
<br />
====Extended CTest Scripting====<br />
<br />
To overcome the limitations of traditional CTest scripting, CTest provides an extended scripting mode. In this mode, the dashboard maintainer has access to individual CTest command functions, such as ctest_configure and ctest_build. By running these functions individually, the user can flexibly develop custom testing schemes. Here is an example of an extended CTest script:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.2)<br />
<br />
set (CTEST_SITE "andoria.kitware")<br />
set (CTEST_BUILD_NAME "Linux-g++")<br />
set (CTEST_NOTES_FILES<br />
"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")<br />
<br />
set (CTEST_DASHBOARD_ROOT "$ENV{HOME}/Dashboards/My Tests")<br />
set (CTEST_SOURCE_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake")<br />
set (CTEST_BINARY_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake-gcc")<br />
<br />
set (CTEST_UPDATE_COMMAND "/usr/bin/cvs")<br />
set (CTEST_CONFIGURE_COMMAND<br />
"\"${CTEST_SOURCE_DIRECTORY}/bootstrap\"")<br />
set (CTEST_BUILD_COMMAND "/usr/bin/make -j 2")<br />
<br />
ctest_empty_binary_directory (${CTEST_BINARY_DIRECTORY})<br />
<br />
ctest_start (Nightly)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
</syntaxhighlight><br />
<br />
The first line is there to make sure an appropriate version of CTest is used. The advanced scripting was introduced in CTest 2.2. The CMake parser is used, and so all scriptable commands from CMake are available. This includes the cmake_minimum_required command:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.2)<br />
</syntaxhighlight><br />
<br />
Overall, the layout of the rest of this script is similar to a traditional one. There are several settings that CTest will use to perform its tasks. Then, unlike with traditional CTest, there are the actual tasks that CTest will perform. Instead of providing information in the project's CMake cache, in this scripting mode all the information is provided to CTest. For compatibility reasons we may choose to write the information to the cache, but that is up to the dashboard maintainer. The first block contains the variables about the submission.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_SITE "andoria.kitware")<br />
set (CTEST_BUILD_NAME "Linux-g++")<br />
set (CTEST_NOTES_FILES<br />
"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")<br />
</syntaxhighlight><br />
<br />
These variables serve the same role as the SITE and BUILD_NAME cache variables. They are used to identify the system once it submits the results to the dashboard. CTEST_NOTES_FILES is a list of files that should be submitted as the notes of the dashboard submission. This variable corresponds to the -A flag of CTest.<br />
<br />
The second block describes the information that CTest functions will use to perform the tasks:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_DASHBOARD_ROOT "$ENV{HOME}/Dashboards/My Tests")<br />
set (CTEST_SOURCE_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake")<br />
set (CTEST_BINARY_DIRECTORY "${CTEST_DASHBOARD_ROOT}/CMake-gcc")<br />
set (CTEST_UPDATE_COMMAND "/usr/bin/cvs")<br />
set (CTEST_CONFIGURE_COMMAND<br />
"\"${CTEST_SOURCE_DIRECTORY}/bootstrap\"")<br />
set (CTEST_BUILD_COMMAND "/usr/bin/make -j 2")<br />
</syntaxhighlight><br />
<br />
The CTEST_SOURCE_DIRECTORY and CTEST_BINARY_DIRECTORY serve the same purpose as in the traditional CTest script. The only difference is that we will be able to override these variables later on when calling the CTest functions, if necessary. The CTEST_UPDATE_COMMAND is the path to the command used to update the source directory from the repository. Currently CTest supports Concurrent Versions System (CVS), Subversion, Git, Mercurial, and Bazaar.<br />
<br />
Both the configure and build handlers support two modes. One mode is to provide the full command that will be invoked during that stage. This is designed to support projects that do not use CMake as their configuration or build tool. In this case, you specify the full command lines to configure and build your project by setting the CTEST_CONFIGURE_COMMAND and CTEST_BUILD_COMMAND variables respectively. This is similar to specifying CTEST_CMAKE_COMMAND in the traditional CTest scripting.<br />
<br />
For projects that use CMake for their configuration and build steps you do not need to specify the command<br />
lines for configuring and building your project. Instead, you will specify the CMake generator to use by setting the CTEST_CMAKE_GENERATOR variable. This way CMake will be run with the appropriate generator.<br />
One example of this is:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CMAKE_GENERATOR "Visual Studio 8 2005")<br />
</syntaxhighlight><br />
<br />
For the build step you should also set the variables CTEST_PROJECT_NAME and CTEST_BUILD_CONFIGURATION, to specify how to build the project. In this case CTEST_PROJECT_NAME will match the top level CMakeLists file's PROJECT command, and therefore also match the name of the generated Visual Studio *.sln file. The CTEST_BUILD_CONFIGURATION should be one of Release, Debug, MinSizeRel, or RelWithDebInfo. Additionally, CTEST_BUILD_FLAGS can be provided as a hint to the build command. An example of testing for a CMake based project would be:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_CMAKE_GENERATOR "Visual Studio 8 2005")<br />
set (CTEST_PROJECT_NAME "Grommit")<br />
set (CTEST_BUILD_CONFIGURATION "Debug")<br />
</syntaxhighlight><br />
<br />
The final block performs the actual testing and submission:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_empty_binary_directory (${CTEST_BINARY_DIRECTORY})<br />
<br />
ctest_start (Nightly)<br />
<br />
ctest_update (SOURCE<br />
"${CTEST_SOURCE_DIRECTORY}" RETURN_VALUE res)<br />
ctest_configure (BUILD<br />
"${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
ctest_submit (RETURN_VALUE res)<br />
</syntaxhighlight><br />
<br />
The ctest_empty_binary_directory command empties the directory and all subdirectories. Please note that this command has a safety measure built in, which is that it will only remove the directory if there is a CMakeCache.txt file in the top level directory. This was intended to prevent CTest from mistakenly removing a non-build directory.<br />
<br />
The rest of the block contains the calls to the actual CTest functions. Each of them corresponds to a CTest -D option. For example, instead of:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest -D ExperimentalBuild<br />
</syntaxhighlight><br />
<br />
the script would contain:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)<br />
</syntaxhighlight><br />
<br />
Each step yields a return value, which indicates if the step was successful. For example, the return value of the Update stage can be used in a continuous dashboard to determine if the rest of the dashboard should be run.<br />
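<br />
A minimal sketch of that pattern, assuming ctest_update stores the number of updated files in res:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Continuous)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}" RETURN_VALUE res)<br />
# only rebuild and submit when the update brought in changes<br />
if (res GREATER 0)<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
endif ()<br />
</syntaxhighlight><br />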
<br />
To demonstrate some advantages of using extended CTest scripting, let us examine a more advanced CTest script. This script drives testing of an application called Slicer. Slicer uses CMake internally, but it drives the build process through a series of Tcl scripts. One of the problems of this approach is that it does not support out-of-source builds. Also, on Windows certain modules come pre-built, so they have to be copied to the build directory. To test a project like that, we would use a script like this:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.2)<br />
<br />
# set the dashboard specific variables -- name and notes<br />
set (CTEST_SITE "dash11.kitware")<br />
set (CTEST_BUILD_NAME "Win32-VS71")<br />
set (CTEST_NOTES_FILES<br />
"${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")<br />
<br />
# do not let any single test run for more than 1500 seconds<br />
set (CTEST_TIMEOUT "1500")<br />
<br />
# set the source and binary directories<br />
set (CTEST_SOURCE_DIRECTORY "C:/Dashboards/MyTests/slicer2")<br />
set (CTEST_BINARY_DIRECTORY "${CTEST_SOURCE_DIRECTORY}-build")<br />
<br />
set (SLICER_SUPPORT<br />
"//Dash11/Shared/Support/SlicerSupport/Lib")<br />
set (TCLSH "${SLICER_SUPPORT}/win32/bin/tclsh84.exe")<br />
<br />
# set the complete update, configure and build commands<br />
set (CTEST_UPDATE_COMMAND<br />
"C:/Program Files/TortoiseCVS/cvs.exe")<br />
set (CTEST_CONFIGURE_COMMAND<br />
"\"${TCLSH}\"<br />
\"${CTEST_BINARY_DIRECTORY}/Scripts/genlib.tcl\"")<br />
set (CTEST_BUILD_COMMAND<br />
"\"${TCLSH}\"<br />
\"${CTEST_BINARY_DIRECTORY}/Scripts/cmaker.tcl\"")<br />
<br />
# clear out the binary tree<br />
file (WRITE "${CTEST_BINARY_DIRECTORY}/CMakeCache.txt"<br />
"// Dummy cache just so that ctest will wipe binary dir")<br />
ctest_empty_binary_directory (${CTEST_BINARY_DIRECTORY})<br />
<br />
# special variables for the Slicer build process<br />
set (ENV{MSVC6} "0")<br />
set (ENV{GENERATOR} "Visual Studio 7 .NET 2003")<br />
set (ENV{MAKE} "devenv.exe")<br />
set (ENV{COMPILER_PATH}<br />
"C:/Program Files/Microsoft Visual Studio .NET<br />
2003/Common7/Vc7/bin")<br />
set (ENV{CVS} "${CTEST_UPDATE_COMMAND}")<br />
<br />
# start and update the dashboard<br />
ctest_start (Nightly)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
# define a macro to copy a directory<br />
macro (COPY_DIR srcdir destdir)<br />
exec_program ("${CMAKE_EXECUTABLE_NAME}" ARGS<br />
"-E copy_directory \"${srcdir}\" \"${destdir}\"")<br />
endmacro ()<br />
<br />
# Slicer does not support out of source builds so we<br />
# first copy the source directory to the binary directory<br />
# and then build it<br />
copy_dir ("${CTEST_SOURCE_DIRECTORY}"<br />
"${CTEST_BINARY_DIRECTORY}")<br />
<br />
# copy support libraries that slicer needs into the binary tree<br />
copy_dir ("${SLICER_SUPPORT}"<br />
"${CTEST_BINARY_DIRECTORY}/Lib")<br />
<br />
# finally do the configure, build, test and submit steps<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
</syntaxhighlight><br />
<br />
With extended CTest scripting we have full control over the flow, so we can perform arbitrary commands at any point. For example, after performing an update of the project, the script copies the source tree into the build directory. This allows it to do an "out-of-source" build.<br />
<br />
<br />
===Setting up a Dashboard Server===<br />
<br />
For many projects, using Kitware's my.cdash.org dashboard hosting will be sufficient. If that is the case for you, then you can skip this section. If you wish to set up your own server, this section will walk you through the process. There are a few options for what to run on the server to process the dashboard results. The preferred option is to use CDash, a dashboard server based on PHP, MySQL, CSS, and XSLT. Predecessors to CDash such as DART 1 and DART 2 can also be used. Information on the DART systems can be found at http://www.itk.org/Dart/HTML/Index.shtml.<br />
<br />
<br />
====CDash Server====<br />
<br />
CDash is a dashboard server developed by Kitware that is based on the common "LAMP stack." It makes use of PHP, CSS, XSL, MySQL/PostgreSQL, and of course your web server (normally Apache). CDash takes the dashboard submissions as XML and stores them in an SQL database (currently MySQL and PostgreSQL are supported). When the web server receives requests for pages, the PHP scripts extract the relevant data from the database and produce XML that is sent to XSL templates, which in turn convert it into HTML. CSS is used to provide the overall look and feel for the pages. CDash can handle large projects, and has been hosting up to 30 projects on a reasonable web server, with just over 200 million records and about 89 gigabytes in the database, stored on a separate database server machine.<br />
<br />
<br />
=====Server requirements=====<br />
<br />
* MySQL (5.x and higher) or PostgreSQL (8.3 and higher)<br />
* PHP (5.0 recommended)<br />
* XSL module for PHP (apt-get install php5-xsl)<br />
* cURL module for PHP<br />
* GD module for PHP<br />
<br />
=====Getting CDash=====<br />
<br />
You can get CDash from the www.cdash.org website, or you can get the latest code from SVN using the following command:<br />
<br />
<syntaxhighlight lang="text"><br />
svn co https://www.kitware.com/svn/CDash/trunk CDash<br />
</syntaxhighlight><br />
<br />
=====Quick installation=====<br />
<br />
1. Unzip or checkout CDash in your webroot directory on the server. Make sure the web server has read permission to the files.<br />
<br />
2. Create a cdash/config.local.php and add the following lines, adapted for your server configuration:<br />
<br />
<syntaxhighlight lang="text"><br />
// Hostname of the database server<br />
$CDASH_DB_HOST = 'localhost';<br />
<br />
// Login for database access<br />
$CDASH_DB_LOGIN = 'root';<br />
<br />
// Password for database access<br />
$CDASH_DB_PASS = '';<br />
<br />
// Name of the database<br />
$CDASH_DB_NAME = 'cdash';<br />
<br />
// Database type<br />
$CDASH_DB_TYPE = 'mysql';<br />
</syntaxhighlight><br />
<br />
3. Point your web browser to the install.php script:<br />
<br />
<syntaxhighlight lang="text"><br />
http://mywebsite.com/CDash/install.php<br />
</syntaxhighlight><br />
<br />
4. Follow the installation instructions<br />
<br />
5. When the installation is done, add the following line to config.local.php to ensure the installation script is no longer accessible:<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_PRODUCTION_MODE = true;<br />
</syntaxhighlight><br />
<br />
<br />
=====Testing the installation=====<br />
<br />
In order to test the installation of the CDash server, you can download a small test project and test the submission to CDash by following these steps:<br />
<br />
1. Download and unzip the test project at:<br />
<br />
<syntaxhighlight lang="text"><br />
http://www.cdash.org/download/CDashTest.zip<br />
</syntaxhighlight><br />
<br />
2. Create a CDash project named "test" on your CDash server (see 10.7 Producing Test Dashboards)<br />
<br />
3. Download the CTestConfig.cmake file from the CDash server, replacing the existing one in CDashTest with the one from your server<br />
<br />
4. Run CMake on CDashTest to configure the project<br />
<br />
5. Run:<br />
<br />
<syntaxhighlight lang="text"><br />
make Experimental<br />
</syntaxhighlight><br />
<br />
6. Go to the dashboard page for the "test" project; you should see the submission in the Experimental section.<br />
<br />
<br />
====Advanced Server Management====<br />
<br />
=====Project Roles=====<br />
<br />
CDash supports three role levels for users:<br />
<br />
* Normal users are regular users with read and/or write access to the project's code repository.<br />
* Site maintainers are responsible for periodic submissions to CDash.<br />
* Project administrators have reserved privileges to administer the project in CDash.<br />
<br />
The first two levels can be defined by the users themselves. Project administrator access must be granted by another administrator of the project, or a CDash server administrator.<br />
<br />
In order to change the current role for a user:<br />
<br />
# Select [Manage project roles] in the administration section<br />
# If you have more than one project, select the appropriate project<br />
# In the "current users" section, change the role for a user<br />
# Click "update" to update the current role<br />
# In order to completely remove a user from a project, click "remove"<br />
# If the CVS login is not correct, it can be changed from this page. Note that users can also change their CVS login manually from their profile<br />
<br />
In order to add a current role for a user:<br />
<br />
# Select [Manage project roles] in the administration section<br />
# Then, if you have more than one project, select the appropriate project<br />
# In the "Add new user" section type the first letters of the first name, last name, or email address of the user you want to add. Or type '%' in order to show all the users registered in CDash<br />
# Select the appropriate user's role<br />
# Optionally enter the user's CVS login<br />
# Click on "add user"<br />
<br />
<<Figure 10.5 : Project Role management page in CDash>><br />
<br />
<br />
=====Importing users=====<br />
<br />
To batch import a list of current users for a given project:<br />
<br />
1. Click on [manage project role] in the administration section<br />
2. Select the appropriate project<br />
3. Click "Browse" to select a CVS users file.<br />
4. The file should be formatted as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
cvsuser:email:first_name last_name<br />
</syntaxhighlight><br />
<br />
5. Click "import"<br />
6. Make sure the reported names and email addresses are correct; deselect any that should not be imported<br />
7. Click on "Register and send email". This will automatically register the users, set a random password and send a registration request to the appropriate email addresses.<br />
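<br />
For example, a minimal users file with two entries might look like the following (the usernames, names, and addresses here are placeholders, not values from any real project):<br />
<br />
<syntaxhighlight lang="text"><br />
jdoe:jdoe@example.com:John Doe<br />
asmith:asmith@example.com:Anna Smith<br />
</syntaxhighlight><br />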
<br />
<br />
=====Google Analytics=====<br />
<br />
Usage statistics of the CDash server can be assessed using Google Analytics. In order to set up Google Analytics:<br />
<br />
# Go to http://www.google.com/analytics/index.html<br />
# Setup an account, if necessary<br />
# Add a website project<br />
# Login into CDash as the administrator of a project<br />
# Click on "Edit Project"<br />
# Add the code from Google into the Google Analytics Tracker field (e.g. UA-43XXXX-X) for your project<br />
<br />
<br />
=====Submission backup=====<br />
<br />
CDash backs up all the incoming XML submissions and places them in the backup directory by default. The default timeframe is 48 hours. The timeframe can be changed in config.local.php as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_BACKUP_TIMEFRAME=72;<br />
</syntaxhighlight><br />
<br />
If projects are private, it is recommended to set the backup directory outside of the Apache root directory to make sure that nobody can access the XML files, or to add the following lines to the .htaccess in the backup directory:<br />
<br />
<syntaxhighlight lang="text"><br />
<Files *><br />
order allow,deny<br />
deny from all<br />
</Files><br />
</syntaxhighlight><br />
<br />
Note that the backup directory is emptied only when a new submission arrives. If necessary, CDash can also import builds from the backup directory.<br />
<br />
# Log into CDash as administrator<br />
# Click on [Import from backups] in the administration section<br />
# Click on "Import backups"<br />
<br />
<br />
====Build Groups====<br />
<br />
Builds can be organized by groups. In CDash, three groups are defined automatically and cannot be removed: Nightly, Continuous and Experimental. These groups are the same as the ones imposed by CTest. Each group has an associated description that is displayed when clicking on the name of the group on the main dashboard.<br />
<br />
<br />
=====To add a new group:=====<br />
<br />
# Click on [manage project groups] in the administration section<br />
# Select the appropriate project<br />
# Under the "create new group" section enter the name of the new group<br />
# Click on "create group". The newly created group appears at the bottom of the current dashboard<br />
<br />
<br />
=====To order groups:=====<br />
<br />
# Click on [manage project groups] in the administration section<br />
# Select the appropriate project<br />
# Under the "Current Groups" section, click on the [up] or [down] links. The order displayed in this page is exactly the same as the order on the dashboard<br />
<br />
<br />
=====To update group description:=====<br />
<br />
# Click on [manage project groups] in the administration section<br />
# Select the appropriate project<br />
# Under the "Current Groups" section, update or add a description in the field next to the [up]/[down] links<br />
# Click "Update Description" in order to commit your changes<br />
<br />
By default, a build belongs to the group associated with the build type defined by CTest, i.e. a nightly build will go in the nightly section. CDash matches a build by its name, site, and build type. For instance, a nightly build named "Linux-gcc-4.3" from the site "midworld.kitware" will be moved to the nightly section unless a rule on "Linux-gcc-4.3"-"midworld.kitware"-"Nightly" is defined. There are two ways to move a build into a given group by defining a rule: Global Move and Single Move.<br />
<br />
<br />
=====Global move allows moving builds in batch.=====<br />
<br />
# Click on [manage project groups] in the administration section.<br />
# Select the appropriate project (if more than one).<br />
# Under "Global Move" you will see a list of the builds submitted in the past 7 days (without duplicates). Note that expected builds are also shown, even if they have not been submitting for the past 7 days.<br />
# You can narrow your search by selecting a specific group (default is All).<br />
# Select the builds to move. Hold "shift" in order to select multiple builds.<br />
# Select the target group. This is mandatory.<br />
# Optionally check the "expected" box if you expect the builds to be submitted on a daily basis. For more information on expected builds, see the "Expected builds" section below.<br />
# Click "Move Selected Builds to Group" to move the builds.<br />
<br />
<br />
=====Single move allows modifying only a particular build.=====<br />
<br />
If logged in as an administrator of the project, a small folder icon is displayed next to each build on the main dashboard page. Clicking on the icon shows some options for each build. In particular, project administrators can mark a build as expected, move a build to a specific group, or delete a bogus build.<br />
<br />
Expected builds: Project administrators can mark certain builds as expected, meaning the builds are expected to submit daily. This allows you to quickly check whether a build is missing from today's dashboard, or to quickly assess how long the build has been missing by clicking on the info icon on the main dashboard.<br />
<br />
<<Figure 10.6: Information regarding a build from the main dashboard page>><br />
<br />
If an expected build was not submitted the previous day and the option "Email Build Missing" is checked for the project, an email will be sent to the site maintainer and project administrator to alert them (see the Sites section for more information).<br />
<br />
<br />
====Email====<br />
<br />
CDash sends email to developers and project administrators when a failure occurs for a given build. The configuration of the email feature is located in three places: the config.local.php file, the project's email configuration section, and the project's groups section.<br />
<br />
In the config.local.php file, two variables are defined to specify the email address from which email is sent and the reply address. Note that the SMTP server cannot be defined in the current version of CDash; it is assumed that a local email server is running on the machine.<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_EMAIL_FROM = 'admin@mywebsite.com';<br />
$CDASH_EMAIL_REPLY = 'noreply@mywebsite.com';<br />
</syntaxhighlight><br />
<br />
<<Figure 10.7: Build Group Configuration Page>><br />
<br />
In the email configuration section of the project, several parameters can be tuned to control the email feature. These parameters were described in the previous section, "Adding CDash Support to a Project" .<br />
<br />
In the "build groups" administration section of a project, an administrator can decide whether emails are sent to a specific group, or whether only a summary email should be sent. The summary email is sent for a given group when at least one build is failing on the current day.<br />
<br />
<br />
====Sites====<br />
<br />
CDash refers to a site as an individual machine submitting at least one build to a given project. A site might submit multiple builds (e.g. nightly and continuous) to multiple projects stored in CDash.<br />
<br />
In order to see the site description, click on the name of the site from the main dashboard page for a project. The description of a site includes information regarding the processor type and speed, as well as the amount of memory available on the given machine. The description of a site is automatically sent by CTest; however, in some cases it might be necessary to edit it manually. Moreover, if the machine is upgraded (e.g. the memory is increased), CDash keeps track of the history of the description, allowing users to compare performance before and after the upgrade.<br />
<br />
Sites usually belong to one maintainer, responsible for the submissions to CDash. It is important for site maintainers to be warned when a site is not submitting, as it could be related to a configuration issue. In order to claim a site, a maintainer should:<br />
<br />
# Log into CDash<br />
# Click on a dashboard containing at least one build for the site<br />
# Click on the site name to open the description of the site<br />
# Click on [claim this site]<br />
<br />
Once a site is claimed, its maintainer will receive emails if the client machine does not submit for an unknown reason, assuming that the site is expected to submit nightly. Furthermore, the site will appear in the "My Sites" section of the maintainer's profile, facilitating a quick check of the site's status.<br />
<br />
Another feature of the site page is the pie chart showing the load of the machine. Assuming that a site submits to multiple projects, it is usually useful to know if the machine has room for other submissions to CDash. The pie chart gives an overview of the machine submission time for each project.<br />
<br />
====Graphs====<br />
<br />
CDash currently plots three types of graphs. The graphs are generated dynamically from the database records, and are interactive.<br />
<br />
<<Figure 10.8: Pie chart showing how much time is spent by a given site on building CDash projects>><br />
<br />
<<Figure 10.9: Map showing the location of the different sites building>><br />
<br />
<<Figure 10.10: Example of build time over time>><br />
<br />
The build time graph displays the time required to build a project over time. In order to display the graph you need to:<br />
<br />
# Go to the main dashboard for the project.<br />
# Click on the build name you want to track.<br />
# On the build summary page, click on [Show Build Time Graph].<br />
<br />
The test time graphs display the time to run a specific test, as well as its status (passed/failed), over time. To display them:<br />
<br />
# Go to the main dashboard for a project.<br />
# Click on the number of tests passed or failed.<br />
# From the list of tests, click on the status of the test.<br />
# Click on [Show Test Time Graph] and/or [Show Failing/Passing Graph].<br />
<br />
<br />
====Adding Notes to a Build====<br />
<br />
In some cases, it is useful to inform other developers that someone is currently looking at the errors for a build. CDash implements a simple note mechanism for that purpose:<br />
<br />
# Login to CDash.<br />
# On the dashboard project page, click on the build name that you would like to add the note to.<br />
# Click on the [Add a Note to this Build] link, located next to the current build matrix (see thumbnail).<br />
# Enter a short message that will be added as a note.<br />
# Select the status of the note: Simple note, Fix in progress, or Fixed.<br />
# Click on "Add Note".<br />
<br />
<br />
====Logging====<br />
<br />
CDash supports an internal logging mechanism using the error_log() PHP function. Any critical SQL errors are logged. By default, the CDash log file is located in the backup directory under the name cdash.log. The location of the log file can be modified by changing the $CDASH_BACKUP_DIRECTORY variable in the config.local.php configuration file.<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_BACKUP_DIRECTORY='/var/temp/cdashbackup/log';<br />
</syntaxhighlight><br />
<br />
The log file can be accessed directly from CDash if the log file is in the standard location:<br />
<br />
# Log into CDash as administrator.<br />
# Click on [CDash logs] in the administration section.<br />
# Click on cdash.log to see the log file.<br />
<br />
CDash 2.0 introduced a log file rotation feature.<br />
<br />
<br />
====Test Timing====<br />
<br />
CDash supports checks on the duration of tests. CDash keeps the current weighted average of the mean and standard deviation for the time each test takes to run in the database. In order to keep the computation as efficient as possible, the following formula is used, which only involves the previous build.<br />
<br />
<syntaxhighlight lang="text"><br />
// alpha is the current "window" for the computation<br />
// By default, alpha is 0.3<br />
newMean = (1-alpha) * oldMean + alpha * currentTime<br />
<br />
newSD = sqrt((1-alpha) * oldSD * oldSD +<br />
             alpha * (currentTime-newMean) * (currentTime-newMean))<br />
</syntaxhighlight><br />
<br />
A test is defined as having failed timing based on the following logic:<br />
<br />
<syntaxhighlight lang="text"><br />
if previousSD < thresholdSD then previousSD = thresholdSD<br />
if currentTime > previousMean + multiplier * previousSD then fail<br />
</syntaxhighlight><br />
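<br />
The update and failure rules above can be sketched in Python. This is an illustration of the formulas rather than CDash's actual PHP implementation, and the threshold and multiplier defaults below are hypothetical values chosen for the example:<br />

```python
import math

ALPHA = 0.3  # default smoothing "window" used in the formula above

def update_timing(old_mean, old_sd, current_time, alpha=ALPHA):
    """Weighted update of the mean and standard deviation of a test's runtime."""
    new_mean = (1 - alpha) * old_mean + alpha * current_time
    new_sd = math.sqrt((1 - alpha) * old_sd * old_sd
                       + alpha * (current_time - new_mean) ** 2)
    return new_mean, new_sd

def timing_failed(current_time, prev_mean, prev_sd,
                  threshold_sd=0.1, multiplier=3.0):
    """A test fails timing when it exceeds the mean by `multiplier` deviations."""
    prev_sd = max(prev_sd, threshold_sd)  # clamp very small deviations
    return current_time > prev_mean + multiplier * prev_sd

# A test that usually takes ~2.0s suddenly takes 4.0s:
mean, sd = update_timing(2.0, 0.05, 4.0)
print(round(mean, 2))                 # 2.6
print(timing_failed(4.0, 2.0, 0.05))  # True
```

Because only the previous mean and deviation are needed, CDash can keep a single pair of numbers per test in the database instead of the full timing history.<br />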
<br />
<br />
====Mobile Support====<br />
<br />
Since CDash is written using template layers via XSLT, developing new layouts is as simple as adding new rendering templates. As a demonstration, an iPhone web template is provided with the current version of CDash.<br />
<br />
The main page shows a list of the public projects hosted on the server. Clicking on the name of a project loads its current dashboard. In the same manner, clicking on a given build displays more detailed information about that build. As of this writing, the ability to log in and to access private sections of CDash is not supported with this layout.<br />
<br />
<br />
====Backing up CDash====<br />
<br />
All of the data (except the logs) used by CDash is stored in its database. It is important to back up the database regularly, especially before performing a CDash upgrade. There are a couple of ways to back up a MySQL database. The easiest is to use the mysqldump command (see http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html):<br />
<br />
<syntaxhighlight lang="text"><br />
mysqldump -r cdashbackup.sql cdash<br />
</syntaxhighlight><br />
<br />
If you are using MyISAM tables exclusively, you can copy the CDash directory in your MySQL data directory. Note that you need to shut down MySQL before doing the copy so that no file can change during the copy. Similarly to MySQL, PostgreSQL has a pg_dump utility:<br />
<br />
<syntaxhighlight lang="text"><br />
pg_dump -U postgreSQL_user cdash > cdashbackup.sql<br />
</syntaxhighlight><br />
<br />
<br />
====Upgrading CDash====<br />
<br />
When a new version of CDash is released, or if you decide to update from the SVN repository, CDash will warn you on the front page if the current database needs to be upgraded. When upgrading to a new release version, the following steps should be taken:<br />
<br />
# Backup your SQL database (see previous section).<br />
# Backup your config.local.php (or config.php) configuration files.<br />
# Replace your current cdash directory with the latest version and copy the config.local.php in the cdash directory.<br />
# Navigate your browser to your CDash page. (e.g. http://localhost/CDash).<br />
# Note the version number on the main page, it should match the version that you are upgrading to.<br />
# The following message may appear: "The current database schema doesn't match the version of CDash you are running, upgrade your database structure in the Administration panel of CDash." This is a helpful reminder to perform the following steps.<br />
# Login to CDash as administrator.<br />
# In the 'Administration' section, click on '[CDash Maintenance]'.<br />
# Click on 'Upgrade CDash': this process might take some time depending on the size of your database (do not close your browser).<br />
#* Progress messages may appear while CDash performs the upgrade.<br />
#* If the upgrade process takes too long you can check in the backup/cdash.log file to see where the process is taking a long time and/or failing.<br />
#* It has been reported that on some systems the spinning icon never turns into a check mark. Please check the cdash.log for the "Upgrade done." string if you feel that the upgrade is taking too long.<br />
#* On a 50GB database the upgrade might take up to 2 hours.<br />
# Some web browsers might have issues when upgrading (with some javascript variables not being passed correctly); in that case you can perform individual updates. For example, upgrading from CDash 1-2 to 1-4:<br />
<br />
<syntaxhighlight lang="text"><br />
http://mywebsite.com/CDash/backwardCompatibilityTools.php?upgrade-1-4=1<br />
</syntaxhighlight><br />
<br />
<<Figure 10.11: Example of dashboard on the iPhone>><br />
<br />
<br />
====CDash Maintenance====<br />
<br />
Database maintenance: we recommend that you perform database optimization (reindexing, purging, etc.) regularly to maintain a stable database. MySQL has a utility called mysqlcheck, and PostgreSQL has several utilities such as vacuumdb.<br />
<br />
Deleting builds with incorrect dates: some builds might be submitted to CDash with the wrong date, either because the date in the XML file is incorrect or the timezone was not recognized by CDash (mainly by PHP). These builds will not show up in any dashboard because the start time is bogus. In order to remove these builds:<br />
<br />
# Login to CDash as administrator.<br />
# Click on [CDash maintenance] in the administration section.<br />
# Click on 'Delete builds with wrong start date'.<br />
<br />
Recompute test timing: if you just upgraded CDash, you might notice that the current submissions show a high number of failing tests due to time defects. This is because CDash does not have enough sample points to compute the mean and standard deviation for each test; in particular, the standard deviation might be very small (probably zero for the first few samples). You should turn "enable test timing" off for about a week, or until you get enough build submissions and CDash has calculated an approximate mean and standard deviation for each test time.<br />
<br />
The other option is to force CDash to compute the mean and standard deviation for each test for the past few days. Be warned that this process may take a long time, depending on the number of tests and projects involved. In order to recompute the test timing:<br />
<br />
# Login to CDash as administrator.<br />
# Click on [CDash maintenance] in the administration section.<br />
# Specify the number of days (default is 4) to recompute the test timings for.<br />
# Click on "Compute test timing". When the process is done the new mean, standard deviation, and status should be updated for the tests submitted during this period.<br />
<br />
<br />
=====Automatic build removal=====<br />
<br />
In order to keep the database at a reasonable size, CDash can automatically purge old builds. There are currently two ways to set up automatic removal of builds. Without a cronjob, edit config.local.php and add/edit the following line:<br />
<br />
<syntaxhighlight lang="text"><br />
$CDASH_AUTOREMOVE_BUILDS='1';<br />
</syntaxhighlight><br />
<br />
CDash will automatically remove builds on the first submission of the day. Note that removing builds might add extra load on the database, or slow down the current submission process if your database is large and the number of submissions is high. If you can use a cronjob, the PHP command line tool can be used to trigger build removals at a convenient time. For example, removing the builds for all the projects at 6am every Sunday:<br />
<br />
<syntaxhighlight lang="text"><br />
0 6 * * 0 php5 /var/www/CDash/autoRemoveBuilds.php all<br />
</syntaxhighlight><br />
<br />
Note that the 'all' parameter can be changed to a specific project name in order to purge builds from a single project.<br />
<br />
<br />
=====CDash XML Schema=====<br />
<br />
The XML parsers in CDash can be easily extended to support new features. The current XML schemas generated by CTest, and their features as described in the book, are located at:<br />
<br />
<syntaxhighlight lang="text"><br />
http://public.kitware.com/Wiki/CDash:XML<br />
</syntaxhighlight><br />
<br />
====Subprojects====<br />
<br />
CDash (versions 1.4 and later) supports splitting projects into subprojects. Some of the subprojects may in turn depend on other subprojects. A typical real life project consists of libraries, executables, test suites, documentation, web pages, and installers. Organizing your project into well-defined subprojects and presenting the results of nightly builds on a CDash dashboard can help identify where the problems are at different levels of granularity.<br />
<br />
A project with subprojects has a different view for its top-level CDash page than a project without any. It contains a summary row for the project as a whole, and then one summary row for each subproject.<br />
<br />
<br />
=====Organizing and defining subprojects=====<br />
<br />
To add subproject organization to your project, you must: (1) define the subprojects for CDash, so that it knows how to display them properly and (2) use build scripts with CTest to submit subproject builds of your project. Some (re-)organization of your project's CMakeLists.txt files may also be necessary to allow building of your project by subprojects.<br />
<br />
<<Figure 10.12: Main project page with subprojects>><br />
<br />
There are two ways to define subprojects and their dependencies: interactively in the CDash GUI when logged in as a project administrator, or by submitting a Project.xml file describing the subprojects and dependencies.<br />
<br />
<br />
=====Adding Subprojects Interactively=====<br />
<br />
As a project administrator, a "Manage subprojects" button will appear for each of your projects on the My CDash page. Clicking the Manage Subprojects button opens the manage subproject page, where you may add new subprojects or establish dependencies between existing subprojects for any project that you are an administrator of. There are two tabs on this page: one for viewing the current subprojects along with their dependencies, and one for creating new subprojects.<br />
<br />
To add subprojects, for instance two subprojects called Exes and Libs, and to make Exes depend on Libs, the following steps are necessary:<br />
<br />
* Click the "Add a subproject" tab.<br />
* Type "Exes" in the "Add a subproject" edit field.<br />
* Click the "Add subproject" button.<br />
* Click the "Add a subproject" tab.<br />
* Type "Libs" in the "Add a subproject" edit field.<br />
* Click the "Add Subproject" button.<br />
* In the "Exes" row of the "Current Subprojects" tab, choose "Libs" from the "Add dependency" drop-down list and click the "Add dependency" button.<br />
<br />
To remove a dependency or a subproject, click on the "X" next to the item you wish to delete.<br />
<br />
<br />
=====Adding Subprojects Automatically=====<br />
<br />
Another way to define CDash subprojects and their dependencies is to submit a "Project.xml" file along with the usual submission files that CTest sends when it submits a build to CDash. To define the same two subprojects as in the interactive example above (Exes and Libs) with the same dependency (Exes depend on Libs), the Project.xml file would look like the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
<Project name="Tutorial"><br />
<SubProject name="Libs"></SubProject><br />
<SubProject name="Exes"><br />
<Dependency name="Libs"/><br />
</SubProject><br />
</Project><br />
</syntaxhighlight><br />
<br />
Once the Project.xml file is written or generated, it can be submitted to CDash from a ctest -S script using the new FILES argument to the ctest_submit command, or directly from the ctest command line in a build tree configured for dashboard submission.<br />
<br />
From inside a ctest -S script:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_submit(FILES "${CTEST_BINARY_DIRECTORY}/Project.xml")<br />
</syntaxhighlight><br />
<br />
From the command line:<br />
<br />
<syntaxhighlight lang="text"><br />
cd ../Project-build<br />
ctest --extra-submit Project.xml<br />
</syntaxhighlight><br />
<br />
CDash will automatically add subprojects and dependencies according to the Project.xml file. CDash will also remove any subprojects or dependencies not defined in the Project.xml file. Additionally, if the same Project.xml is submitted multiple times, the second and subsequent submissions will have no observable effect: the first submission adds/modifies the data, the second and later submissions send the same data, so no changes are necessary. CDash tracks changes to the subproject definitions over time to allow for projects to evolve. If you view dashboards from a past date, CDash will present the project/subproject views according to the subproject definitions in effect on that date.<br />
<br />
<br />
====Using ctest_submit with PARTS and FILES====<br />
<br />
In CTest version 2.8 and later, the ctest_submit() (page 354) command supports new PARTS and FILES arguments. With PARTS, you can send any subset of the xml files with each ctest_submit call. Previously, all parts would be sent with any call to ctest_submit. Typically, the script would wait until all dashboard stages were complete and then call ctest_submit once to send the results of all stages at the end of the run. Now, a script may call ctest_submit with PARTS to do partial submissions of subsets of the results. For example, you can submit configure results after ctest_configure() (page 352), build results after ctest_build() (page 351), and test results after ctest_test() (page 355). This allows for information to be posted as the builds progress.<br />
<br />
With FILES, you can send arbitrary XML files to CDash. In addition to the standard build result XML files that CTest sends, CDash also handles the new Project.xml file that describes subprojects and dependencies. Prior to the addition of the ctest_submit PARTS handling, a typical dashboard script would contain a single ctest_submit() call on its last line:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit ()<br />
</syntaxhighlight><br />
<br />
Now, submissions can occur incrementally, with each part of the submission sent piecemeal as it becomes available:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit (PARTS Update Configure Notes)<br />
<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
ctest_submit (PARTS Build)<br />
<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit (PARTS Test)<br />
</syntaxhighlight><br />
<br />
Submitting incrementally by parts means that you can inspect the results of the configure stage live on the CDash dashboard while the build is still in progress. Likewise, you can inspect the results of the build stage live while the tests are still running. When submitting by parts, it's important to use the APPEND keyword in the ctest_build command. If you don't use APPEND, then CDash will erase any existing build with the same build name, site name, and build stamp when it receives the Build.xml file.<br />
<br />
<br />
====Splitting Your Project into Multiple Subprojects====<br />
<br />
One ctest_build() (page 351) invocation that builds everything, followed by one ctest_test() (page 355) invocation that tests everything, is sufficient for a project that has no subprojects. If you want to submit results on a per-subproject basis to CDash, however, you will have to make some changes to your project and test scripts. For your project, you need to identify which targets are part of which subprojects. If you organize your CMakeLists files such that you have a target to build for each subproject, and you can derive (or look up) the name of that target based on the subproject name, then revising your script to separate it into multiple smaller configure/build/test chunks should be relatively painless. To do this, you can modify your CMakeLists files in various ways depending on your needs. The most common changes are listed below.<br />
<br />
<br />
=====CMakeLists.txt modifications=====<br />
<br />
* Name targets the same as subprojects, base target names on subproject names, or provide a look-up mechanism to map from subproject name to target name.<br />
* Possibly add custom targets to aggregate existing targets into subprojects, using add_dependencies to say which existing targets the custom target depends on.<br />
* Add the LABELS target property to targets with a value of the subproject name.<br />
* Add the LABELS test property to tests with a value of the subproject name.<br />
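<br />
The aggregation approach in the second bullet can be sketched as follows; the PluginA and PluginB target names are hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
# aggregate two existing targets into a "Plugins" subproject<br />
add_custom_target (Plugins)<br />
add_dependencies (Plugins PluginA PluginB)<br />
set_property (TARGET Plugins PROPERTY LABELS Plugins)<br />
</syntaxhighlight><br />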
<br />
Next, you need to modify your CTest scripts that run your dashboards. To split your one large monolithic<br />
build into smaller subproject builds, you can use a foreach loop in your CTest driver script. To help you<br />
iterate over your subprojects, CDash provides a variable named CTEST_PROJECT_SUBPROJECTS in<br />
CTestConfig.cmake. Given the above example, CDash produces a variable like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CTEST_PROJECT_SUBPROJECTS Libs Exec)<br />
</syntaxhighlight><br />
<br />
CDash orders the elements in this list such that the independent subprojects (that do not depend on any other subprojects) are first, followed by subprojects that depend only on the independent subprojects, and after that subprojects that depend on those. The same logic continues until all subprojects are listed exactly once in this list in an order that makes sense for building them sequentially, one after the other.<br />
<br />
To facilitate building just the targets associated with a subproject, use the variable named CTEST_BUILD_TARGET to tell ctest_build() (page 351) what to build. To facilitate running just the tests associated with a subproject, assign the LABELS test property to your tests and use the new INCLUDE_LABEL argument to ctest_test() (page 355).<br />
<br />
<br />
=====CTest driver script modifications=====<br />
<br />
* Iterate over the subprojects in dependency order (from independent to most dependent...).<br />
* Set the SubProject and Label global properties - CTest uses these properties to submit the results to the correct subproject on the CDash server.<br />
* Build the target(s) for this subproject: compute the name of the target to build from the subproject name, set CTEST_BUILD_TARGET, call ctest_build.<br />
* Run the tests for this subproject using the INCLUDE or INCLUDE_LABEL arguments to ctest_test.<br />
* Use ctest_submit with the PARTS argument to submit partial results as they complete.<br />
<br />
<br />
To illustrate this, the following example shows the changes required to split a build into smaller pieces. Assume that the subproject name is the same as the target name required to build the subproject's components. For example, here is a snippet from CMakeLists.txt, in the hypothetical Tutorial project. The only additions necessary (since the target names are the same as the subproject names) are the calls to set_property() (page 329) for each target and each test.<br />
<br />
<syntaxhighlight lang="text"><br />
# "Libs" is the library name (therefore a target name) and<br />
# the subproject name<br />
add_library (Libs ...)<br />
set_property (TARGET Libs PROPERTY LABELS Libs)<br />
add_test (LibsTest1 ...)<br />
add_test (LibsTest2 ...)<br />
set_property (TEST LibsTest1 LibsTest2 PROPERTY LABELS Libs)<br />
<br />
# "Exes" is the executable name (therefore a target name)<br />
# and the subproject name<br />
add_executable (Exes ...)<br />
target_link_libraries (Exes Libs)<br />
set_property (TARGET Exes PROPERTY LABELS Exes)<br />
add_test (ExesTest1 ...)<br />
add_test (ExesTest2 ...)<br />
set_property (TEST ExesTest1 ExesTest2 PROPERTY LABELS Exes)<br />
</syntaxhighlight><br />
<br />
Here is an example of what the CTest driver script might look like before and after organizing this project into subprojects. Before the changes:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
# builds *all* targets: Libs and Exes<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
</syntaxhighlight><br />
<br />
After the changes:<br />
<br />
<syntaxhighlight lang="text"><br />
ctest_start (Experimental)<br />
ctest_update (SOURCE "${CTEST_SOURCE_DIRECTORY}")<br />
ctest_submit (PARTS Update Notes)<br />
<br />
# to get CTEST_PROJECT_SUBPROJECTS definition:<br />
include ("${CTEST_SOURCE_DIRECTORY}/CTestConfig.cmake")<br />
foreach (subproject ${CTEST_PROJECT_SUBPROJECTS})<br />
set_property (GLOBAL PROPERTY SubProject ${subproject})<br />
set_property (GLOBAL PROPERTY Label ${subproject})<br />
<br />
ctest_configure (BUILD "${CTEST_BINARY_DIRECTORY}")<br />
ctest_submit (PARTS Configure)<br />
<br />
set (CTEST_BUILD_TARGET "${subproject}")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
# builds target ${CTEST_BUILD_TARGET}<br />
ctest_submit (PARTS Build)<br />
ctest_test (BUILD "${CTEST_BINARY_DIRECTORY}"<br />
INCLUDE_LABEL "${subproject}"<br />
)<br />
<br />
# runs only tests that have a LABELS property matching<br />
# "${subproject}"<br />
ctest_submit (PARTS Test)<br />
endforeach ()<br />
</syntaxhighlight><br />
<br />
In some projects, more than one ctest_build step may be required to build all the pieces of the subproject. For example, in Trilinos, each subproject first builds the ${subproject}_libs target, and then builds the all target to build all the configured executables in the test suite. Trilinos also configures its dependencies so that, when the all target is built, only the executables needed by the currently configured packages are built.<br />
<br />
Normally, if you submit multiple Build.xml files to CDash with the same exact build stamp, it will delete the existing entry and add the new entry in its place. In the case where multiple ctest_build steps are required, each with their own ctest_submit (PARTS Build) call, use the APPEND keyword argument in all of the ctest_build calls that belong together. The APPEND flag tells CDash to accumulate the results from multiple submissions and display the aggregation of all of them in one row on the dashboard. From CDash's perspective, multiple ctest_build calls (with the same build stamp and subproject and APPEND turned on) result in a single CDash build.<br />
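<br />
A sketch of this pattern, assuming a Trilinos-style ${subproject}_libs target, might look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# first build step for the subproject<br />
set (CTEST_BUILD_TARGET "${subproject}_libs")<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
<br />
# second build step; APPEND tells CDash to merge both<br />
# results into a single build row<br />
set (CTEST_BUILD_TARGET all)<br />
ctest_build (BUILD "${CTEST_BINARY_DIRECTORY}" APPEND)<br />
ctest_submit (PARTS Build)<br />
</syntaxhighlight><br />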
<br />
Adopt some of these tips and techniques in your favorite CMake-based project:<br />
<br />
* LABELS is a new CMake/CTest property that applies to source files, targets, and tests. Labels are sent to CDash inside the resulting XML files.<br />
* Use ctest_submit (PARTS) to do incremental submissions. Results are available for viewing on the dashboards sooner. Don't forget to use APPEND in your ctest_build calls when submitting by parts.<br />
* Use INCLUDE_LABEL with ctest_test to run only the tests with labels that match the regular expression.<br />
* Use CTEST_BUILD_TARGET to build your subprojects one at a time, submitting subproject dashboards along the way.<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>
<div>==CHAPTER NINE::PACKAGING WITH CPack==<br />
<br />
CPack is a powerful, easy to use, cross-platform software packaging tool distributed with CMake since version 2.4.2. It uses the generators concept from CMake to abstract package generation on specific platforms. It can be used with or without CMake, but it may depend on some software being installed on the system. Using a simple configuration file, or using a CMake module, the author of a project can package a complex project into a simple installer. This chapter will describe how to apply CPack to a CMake project.<br />
<br />
<br />
===CPack Basics===<br />
<br />
Users of your software may not always want to, or be able to, build the software in order to install it. The software may be closed source, or it may take a long time to compile, or in the case of an end user application, the users may not have the skill or the tools to build the application. For these cases, what is needed is a way to build the software on one machine, and then move the install tree to a different machine. The most basic way to do this is to use the DESTDIR environment variable to install the software into a temporary location, then to tar or zip up that directory and move it to another machine. However, the DESTDIR approach falls short on Windows, simply because path names typically start with a drive letter (C:/) and you cannot simply prefix one full path with another and get a valid path name. Another, more powerful approach is to use CPack, which is included in CMake.<br />
<br />
CPack is a tool included with CMake; it can be used to create installers and packages for projects. CPack can create two basic types of packages: source and binary. CPack works in much the same way as CMake does for building software. It does not aim to replace native packaging tools; rather, it provides a single interface to a variety of tools. Currently CPack supports the creation of Windows installers using the NullSoft Scriptable Install System (NSIS), the Mac OS X PackageMaker tool, OS X Drag and Drop, OS X X11 Drag and Drop, Cygwin Setup packages, Debian packages, RPMs, .tar.gz, .sh (self-extracting .tar.gz files), and .zip compressed files. The implementation of CPack works in a similar way to CMake. For each type of packaging tool supported, there is a CPack generator written in C++ that is used to run the native tool and create the package. For simple tar-based packages, CPack includes a library version of tar and does not require tar to be installed on the system. For many of the other installers, native tools must be present for CPack to function.<br />
<br />
With source packages, CPack makes a copy of the source tree and creates a zip or tar file. For binary packages, the use of CPack is tied to the install commands working correctly for a project. When setting up install commands, the first step is to make sure the files go into the correct directory structure with the correct permissions. The next step is to make sure the software is relocatable and can run in an installed tree. This may require changing the software itself, and there are many techniques to do that for different environments that go beyond the scope of this book. Basically, executables should be able to find data or other files using paths relative to the location where they are installed. CPack installs the software into a temporary directory, and copies the install tree into the format of the native packaging tool. Once the install commands have been added to a project, enabling CPack in the simplest case is done by including the CPack.cmake file into the project.<br />
<br />
<br />
====Simple Example====<br />
<br />
The most basic CPack project would look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
project(CoolStuff)<br />
add_executable(coolstuff coolstuff.cxx)<br />
install(TARGETS coolstuff RUNTIME DESTINATION bin)<br />
include(CPack)<br />
</syntaxhighlight><br />
<br />
In the CoolStuff project, an executable is created and installed into the directory bin. Then the CPack file is included by the project. At this point the CoolStuff project will have CPack enabled. To run CPack for CoolStuff, you would first build the project as you would any other CMake project. CPack adds several targets to the generated project. In Makefiles these targets are named package and package_source; in Visual Studio and Xcode the target is PACKAGE. For example, to build a source and binary package for CoolStuff using a Makefile generator you would run the following commands:<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir build<br />
cd build<br />
cmake ../CoolStuff<br />
make<br />
make package<br />
make package_source<br />
</syntaxhighlight><br />
<br />
This would create a source zip file called CoolStuff-0.1.1-Source.zip, an NSIS installer called CoolStuff-0.1.1-win32.exe, and a binary zip file called CoolStuff-0.1.1-win32.zip. The same thing could be done using the CPack command line:<br />
<br />
<syntaxhighlight lang="text"><br />
cd build<br />
cpack -C CPackConfig.cmake<br />
cpack -C CPackSourceConfig.cmake<br />
</syntaxhighlight><br />
<br />
<br />
====What Happens When CPack.cmake Is Included?====<br />
<br />
When the include(CPack) command is executed, the CPack.cmake file is included into the project. By default this will use the configure_file command to create CPackConfig.cmake and CPackSourceConfig.cmake in the binary tree of the project. These files contain a series of set commands that set variables for use when CPack is run during the packaging step. The names of the files that are configured by the CPack.cmake file can be customized with two variables: CPACK_OUTPUT_CONFIG_FILE, which defaults to CPackConfig.cmake, and CPACK_SOURCE_OUTPUT_CONFIG_FILE, which defaults to CPackSourceConfig.cmake.<br />
<br />
The source for these files can be found in Templates/CPackConfig.cmake.in. This file contains some comments and a single variable that is set by CPack.cmake. The file contains this line of CMake code:<br />
<br />
<syntaxhighlight lang="text"><br />
@_CPACK_OTHER_VARIABLES_@<br />
</syntaxhighlight><br />
<br />
If the project contains the file CPackConfig.cmake.in in the top level of the source tree, that file will be used instead of the file in the Templates directory. If the project contains the file CPackSourceConfig.cmake.in, then that file will be used for the creation of CPackSourceConfig.cmake.<br />
<br />
The configuration files created by CPack.cmake will contain all the variables that begin with "CPACK_" in the current project. This is done using the command<br />
<br />
<syntaxhighlight lang="text"><br />
get_cmake_property(res VARIABLES)<br />
</syntaxhighlight><br />
<br />
The above command gets all variables defined for the current CMake project. Some CMake code then looks for all variables starting with "CPACK_", and each variable found is configured into the two configuration files as CMake code. For example, if you had a variable set like this in your CMake project:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_NAME "CoolStuff")<br />
</syntaxhighlight><br />
<br />
CPackConfig.cmake and CPackSourceConfig.cmake would have the same thing in them:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_NAME "CoolStuff")<br />
</syntaxhighlight><br />
<br />
It is important to keep in mind that CPack is run after CMake on the project. CPack uses the same parser as CMake, but will not have the same variable values as the CMake project. It will only have the variables that start with CPACK_, and these variables will be configured into a configuration file by CMake. This can cause some errors and confusion if the values of the variables use escape characters. Since they are getting parsed twice by the CMake language, they will need double the level of escaping. For example, if you had the following in your CMake project:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool \"Company\"")<br />
</syntaxhighlight><br />
<br />
The resulting CPack files would have this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool "Company"")<br />
</syntaxhighlight><br />
<br />
That would not be exactly what you would want or expect. In fact, it just wouldn't work. To get around this problem, there are two solutions. The first is to add an additional level of escapes to the original set command like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool \\\"Company\\\"")<br />
</syntaxhighlight><br />
<br />
This would result in the correct set command which would look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool \"Company\"")<br />
</syntaxhighlight><br />
<br />
The second solution to the escaping problem is to use a CPack project config file, explained in the next section.<br />
<br />
<br />
====Adding Custom CPack Options====<br />
<br />
To avoid the escaping problem, a project-specific CPack configuration file can be specified. This file will be loaded by CPack after CPackConfig.cmake or CPackSourceConfig.cmake is loaded, and CPACK_GENERATOR will be set to the CPack generator being run. Variables set in this file only require one level of CMake escapes. This file can be configured or not, and contains regular CMake code. In the example above, you could move the setting of CPACK_PACKAGE_VENDOR into a file MyCPackOptions.cmake.in and configure that file into the build tree of the project. Then set the project configuration file path like this:<br />
<br />
<syntaxhighlight lang="text"><br />
configure_file ("${PROJECT_SOURCE_DIR}/MyCPackOptions.cmake.in"<br />
                "${PROJECT_BINARY_DIR}/MyCPackOptions.cmake"<br />
                @ONLY)<br />
set (CPACK_PROJECT_CONFIG_FILE<br />
     "${PROJECT_BINARY_DIR}/MyCPackOptions.cmake")<br />
</syntaxhighlight><br />
<br />
Where MyCPackOptions.cmake.in contained:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool \"Company\"")<br />
</syntaxhighlight><br />
<br />
The CPACK_PROJECT_CONFIG_FILE variable should contain the full path to the CPack config file for the project, as seen in the above example. This has the added advantage that the CMake code can contain if statements based on the CPACK_GENERATOR value, so that packager specific values can be set for a project. For example, the CMake project sets the icon for the installer in this file:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_NSIS_MUI_ICON<br />
"@CMake_SOURCE_DIR@/Utilities/Release\\CMakeLogo.ico")<br />
</syntaxhighlight><br />
<br />
Note that the path uses forward slashes except for the last separator, which is an escaped backslash. As of the writing of this book, NSIS needed the last part of the path to use a Windows-style backslash. If you do not do this, you may get the following error:<br />
<br />
<syntaxhighlight lang="text"><br />
File: ".../Release/CMakeLogo.ico" -> no files found.<br />
Usage: File [/nonfatal] [/a] ([/r] [/x filespec [...]]<br />
filespec [...] | /oname=outfile one_file_only)<br />
</syntaxhighlight><br />
<br />
<br />
====Options Added by CPack====<br />
<br />
In addition to creating the two configuration files, CPack.cmake will add some advanced options to your project. The options added depend on the environment and OS that CMake is running on, and control the default packages that are created by CPack. These options are of the form CPACK_<CPackGeneratorName>, where the generator names available on each platform can be found in the following table:<br />
<br />
<<Figure platform table>><br />
<br />
Turning these options on or off affects the packages that are created when running CPack with no options. If<br />
the option is off in the CMakeCache.txt file for the project, you can still build that package type by specifying<br />
the -G option to the CPack command line.<br />
<br />
<br />
===CPack Source Packages===<br />
<br />
Source packages in CPack simply copy the entire source tree for a project into a package file, and no install rules are used as they are in the case of binary packages. Out of source builds should be used to avoid having extra binary stuff polluting the source package. If you have files or directories in your source tree that are not wanted in the source package, you can use the variable CPACK_SOURCE_IGNORE_FILES to exclude things from the package. This variable contains a list of regular expressions. Any file or directory that matches a regular expression in that list will be excluded from the sources. The default setting is as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
"/CVS/;/\\\\.svn/;\\\\.swp$;\\\\.#;/#"<br />
</syntaxhighlight><br />
<br />
There are many levels of escapes used in the default value, as this variable is parsed once by CMake and again by CPack. It is important to realize that the source package will not use any install commands; CPack simply copies the entire source tree, minus the files it is told to ignore, into the package. To avoid the multiple levels of escaping, the file referenced by CPACK_PROJECT_CONFIG_FILE should be used to set this variable. The entries are regular expressions and not wildcard patterns; see Chapter 4 for more information about CMake regular expressions.<br />
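<br />
For example, a project config file (referenced by CPACK_PROJECT_CONFIG_FILE) needs only one level of escaping; the /build/ entry here is a hypothetical addition:<br />
<br />
<syntaxhighlight lang="text"><br />
# in MyCPackOptions.cmake, parsed only once (by CPack)<br />
set (CPACK_SOURCE_IGNORE_FILES<br />
     "/CVS/;/\\.svn/;\\.swp$;/build/")<br />
</syntaxhighlight><br />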
<br />
<br />
===CPack Installer Commands===<br />
<br />
Since binary packages require CPack to interact with the install rules of the project being packaged, this section will cover some of the options CPack provides to interact with the install rules of a project. CPack can work with CMake's install scripts or with external install commands.<br />
<br />
====CPack and CMake install commands====<br />
<br />
In most CMake projects, using the CMake install rules will be sufficient to create the desired package. By default CPack will run the install rule for the current project. However, if you have a more complicated project, you can specify subprojects and install directories with the variable CPACK_INSTALL_CMAKE_PROJECTS. This variable should hold quadruplets of install directory, install project name, install component, and install subdirectory. For example, if you had a project with a subproject called MySub that was compiled into a directory called SubProject, and you wanted to install all of its components, you would have this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_INSTALL_CMAKE_PROJECTS "SubProject;MySub;ALL;/")<br />
</syntaxhighlight><br />
<br />
<br />
====CPack and DESTDIR====<br />
<br />
By default CPack does not use the DESTDIR option during the installation phase. Instead it sets the CMAKE_INSTALL_PREFIX to the full path of the temporary directory being used by CPack to stage the install package.<br />
This can be changed by setting CPACK_SET_DESTDIR (page 682) to on. If the CPACK_SET_DESTDIR option is on, CPack will use the project's cache value for CMAKE_INSTALL_PREFIX, and set DESTDIR to the temporary staging area. This allows absolute paths to be installed under the temporary directory. Relative paths are installed into DESTDIR/${project's CMAKE_INSTALL_PREFIX}, where DESTDIR is set to the temporary staging area.<br />
<br />
As noted earlier, the DESTDIR approach does not work when the install rules reference files by Windows full paths beginning with drive letters (C:/).<br />
<br />
When doing a non-DESTDIR install for packaging, which is the default, any absolute paths are installed into absolute directories, and not into the package. Therefore, projects that do not use the DESTDIR option must not use any absolute paths in install rules. Conversely, projects that use absolute paths must use the DESTDIR option.<br />
<br />
One other variable can be used to control the root path projects are installed into, the CPACK_PACKAGING_INSTALL_PREFIX (page 682). By default many of the generators install into the directory /usr. That variable can be used to change that to any directory, including just /.<br />
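<br />
For instance, a package intended to unpack under /opt rather than /usr could set the following (the path is illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGING_INSTALL_PREFIX "/opt/coolstuff")<br />
</syntaxhighlight><br />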
<br />
<br />
====CPack and other installed directories====<br />
<br />
It is possible to run other install rules if the project is not CMake based. This can be done by using the variables CPACK_INSTALL_COMMANDS, and CPACK_INSTALLED_DIRECTORIES. CPACK_INSTALL_COMMANDS are commands that will be run during the installation phase of the packaging. CPACK_INSTALLED_DIRECTORIES should contain pairs of directory and subdirectory. The subdirectory can be '.' to be installed in the top-level directory of the installation. The files in each directory will be copied to the corresponding subdirectory of the CPack staging directory and packaged with the rest of the files.<br />
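<br />
A minimal sketch, assuming a hypothetical external Makefile and a pre-populated staging directory:<br />
<br />
<syntaxhighlight lang="text"><br />
# run an external install step during packaging<br />
set (CPACK_INSTALL_COMMANDS "make -C /path/to/legacy install")<br />
# then copy a pre-installed tree into the top-level<br />
# directory of the package<br />
set (CPACK_INSTALLED_DIRECTORIES "/path/to/staging;.")<br />
</syntaxhighlight><br />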
<br />
<br />
===CPack for Windows Installer NSIS===<br />
<br />
To create Windows style wizard based installer programs, CPack uses NSIS (NullSoft Scriptable Install System). More information about NSIS can be found at the NSIS home page (http://nsis.sourceforge.net/). NSIS is a powerful tool with a scripting language used to create professional Windows installers. To create Windows installers with CPack, you will need NSIS installed on your machine.<br />
<br />
CPack uses configured template files to control NSIS. There are two files configured by CPack during the creation of an NSIS installer. Both files are found in the CMake Modules directory. Modules/NSIS.template.in is the template for the NSIS script, and Modules/NSIS.InstallOptions.ini.in is the template for the modern user interface, or MUI, used by NSIS. The install options file contains the information about the pages used in the install wizard. This section will describe how to configure CPack to create an NSIS install wizard.<br />
<br />
<br />
====CPack Variables Used by CMake for NSIS====<br />
<br />
This section contains screen captures from the CMake NSIS install wizard. For each part of the installer that can be changed or controlled from CPack, the variables and values used are given.<br />
<br />
The first thing that a user will see of the installer in Windows is the icon for the installer executable itself. By default the installer will have the NullSoft installer icon, as seen in Figure 9.1 for the 20071023 CMake installer. This icon can be changed by setting the variable CPACK_NSIS_MUI_ICON. The installer for 20071025 in the same figure shows the CMake icon being used for the installer.<br />
<br />
<<figure 9.1>><br />
<br />
The last thing a user will see of the installer in Windows is the icon for the uninstall executable, as seen in Figure 9.2. This option can be set with the CPACK_NSIS_MUI_UNIICON variable. Both the install and uninstall icons must be the same size and format: a valid Windows .ico file usable by Windows Explorer. The icons are set like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# set the install/uninstall icon used for the installer itself<br />
set (CPACK_NSIS_MUI_ICON<br />
"${CMake_SOURCE_DIR}/Utilities/Release\\CMakeLogo.ico")<br />
set (CPACK_NSIS_MUI_UNIICON<br />
"${CMake_SOURCE_DIR}/Utilities/Release\\CMakeLogo.ico")<br />
</syntaxhighlight><br />
<br />
<<figure 9.2>><br />
<br />
On Windows, programs can also be removed using the Add or Remove Programs tool from the control panel as seen in Figure 9.3. The icon for this should be embedded in one of the installed executables. This can be set like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# set the add/remove programs icon using an installed executable<br />
set (CPACK_NSIS_INSTALLED_ICON_NAME "bin\\cmake-gui.exe")<br />
</syntaxhighlight><br />
<br />
<<figure 9.3>><br />
<br />
<<figure 9.4>><br />
<br />
When running the installer, the first screen of the wizard will look like Figure 9.4. In this screen you can control the name of the project that shows up in two places on the screen. The name used for the project is controlled by the variable CPACK_PACKAGE_INSTALL_DIRECTORY or CPACK_NSIS_PACKAGE_NAME. In this example, it was set to "CMake 2.5" like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_INSTALL_DIRECTORY "CMake<br />
${CMake_VERSION_MAJOR}.${CMake_VERSION_MINOR}")<br />
<br />
set (CPACK_NSIS_PACKAGE_NAME "CMake<br />
${CMake_VERSION_MAJOR}.${CMake_VERSION_MINOR}") <br />
</syntaxhighlight><br />
<br />
<<figure 9.5>><br />
<br />
The second page of the install wizard can be seen in Figure 9.5. This screen contains the license agreement and there are several things that can be configured on this page. The banner bitmap to the left of the "License Agreement" label is controlled by the variable CPACK_PACKAGE_ICON like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_ICON<br />
"${CMake_SOURCE_DIR}/Utilities/Release\\CMakeInstall.bmp")<br />
</syntaxhighlight><br />
<br />
CPACK_PACKAGE_INSTALL_DIRECTORY is used again on this page everywhere you see the text "CMake 2.5". The text of the license agreement is set to the contents of the file specified in the CPACK_RESOURCE_FILE_LICENSE variable. CMake does the following:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_RESOURCE_FILE_LICENSE<br />
"${CMAKE_CURRENT_SOURCE_DIR}/Copyright.txt")<br />
</syntaxhighlight><br />
<br />
<<figure 9.6>><br />
<br />
The third page of the installer can be seen in Figure 9.6. This page will only show up if CPACK_NSIS_MODIFY_PATH is set to on. If you check the Create "name" Desktop Icon button, and you put executable names in the variable CPACK_CREATE_DESKTOP_LINKS, then a desktop icon for those executables will be created. For example, to create a desktop icon for the cmake-gui program of CMake, the following is done:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_CREATE_DESKTOP_LINKS cmake-gui)<br />
</syntaxhighlight><br />
<br />
Multiple desktop links can be created if your application contains more than one executable. The link will be created to the Start Menu entry, so CPACK_PACKAGE_EXECUTABLES, which is described later in this section, must also contain the application in order for a desktop link to be created.<br />
<br />
<<figure 9.7>><br />
<br />
The fourth page of the installer seen in Figure 9.7 uses the variable CPACK_PACKAGE_INSTALL_DIRECTORY to specify the default destination folder in Program Files. The following CMake code was used to set that default:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_INSTALL_DIRECTORY "CMake<br />
${CMake_VERSION_MAJOR}.${CMake_VERSION_MINOR}")<br />
</syntaxhighlight><br />
<br />
The remaining pages of the installer wizard do not use any additional CPack variables, and are not included in this section. Another important option that can be set by the NSIS CPack generator is the registry key used. There are several CPack variables that control the default key used. The key is defined in the NSIS.template.in file as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
!define MUI_STARTMENUPAGE_REGISTRY_KEY<br />
"Software\@CPACK_PACKAGE_VENDOR@\@CPACK_PACKAGE_INSTALL_REGISTRY_KEY@"<br />
</syntaxhighlight><br />
<br />
Where the CPACK_PACKAGE_VENDOR value defaults to Humanity, and CPACK_PACKAGE_INSTALL_REGISTRY_KEY defaults to ${CPACK_PACKAGE_NAME} ${CPACK_PACKAGE_VERSION}.<br />
<br />
So for CMake 2.5.20071025 the registry key would look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
HKEY_LOCAL_MACHINE\SOFTWARE\Kitware\CMake 2.5.20071025<br />
</syntaxhighlight><br />
<br />
<br />
====Creating Windows Short Cuts in the Start Menu====<br />
<br />
There are two variables that control the shortcuts that are created in the Windows Start menu by NSIS. The<br />
variables contain lists of pairs, and must have an even number of elements to work correctly. The first is<br />
CPACK_PACKAGE_EXECUTABLES; it should contain the name of the executable file followed by the name<br />
of the shortcut text. For example, in the case of CMake, the executable is called cmake-gui, but the shortcut<br />
is named "CMake". CMake does the following to create that shortcut:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_EXECUTABLES "cmake-gui" "CMake")<br />
</syntaxhighlight><br />
<br />
The second is CPACK_NSIS_MENU_LINKS. This variable contains arbitrary links into the install tree, or to<br />
external web pages. The first of the pair is always the existing source file or location, and the second is the<br />
name that will show up in the Start menu. To add a link to the help file for cmake-gui and a link to the CMake<br />
web page add the following:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_NSIS_MENU_LINKS<br />
"doc/cmake-${VERSION_MAJOR}.${VERSION_MINOR}/cmake-gui.html"<br />
"cmake-gui Help" "http://www.cmake.org" "CMake Web Site")<br />
</syntaxhighlight><br />
<br />
<br />
====Advanced NSIS CPack Options====<br />
<br />
In addition to the variables already discussed, CPack provides a few additional variables that are directly configured into the NSIS script file. These can be used to add NSIS script fragments to the final NSIS script used to create the installer. They are as follows:<br />
<br />
'''CPACK_NSIS_EXTRA_INSTALL_COMMANDS''' Extra commands used during install.<br />
<br />
'''CPACK_NSIS_EXTRA_UNINSTALL_COMMANDS''' Extra commands used during uninstall.<br />
<br />
'''CPACK_NSIS_CREATE_ICONS_EXTRA''' Extra NSIS commands in the icon section of the script.<br />
<br />
'''CPACK_NSIS_DELETE_ICONS_EXTRA''' Extra NSIS commands in the delete icons section of the script.<br />
<br />
When using these variables, the NSIS documentation should be referenced, and the author should look at the NSIS.template.in file for the exact placement of the variables.<br />
<br />
<br />
====Setting File Extension Associations With NSIS====<br />
<br />
One example of a useful thing that can be done with the extra install commands is to create associations from file extensions to the installed application. For example, if you had an application CoolStuff that could open files with the extension .cool, you would set the following extra install and uninstall commands:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_NSIS_EXTRA_INSTALL_COMMANDS "<br />
WriteRegStr HKCR '.cool' '' 'CoolFile'<br />
WriteRegStr HKCR 'CoolFile' '' 'Cool Stuff File'<br />
WriteRegStr HKCR 'CoolFile\\shell' '' 'open'<br />
WriteRegStr HKCR 'CoolFile\\Defaulticon' \\<br />
'' '$INSTDIR\\bin\\coolstuff.exe,0'<br />
WriteRegStr HKCR 'CoolFile\\shell\\open\\command' \\<br />
'' '$INSTDIR\\bin\\coolstuff.exe \"%1\"'<br />
WriteRegStr HKCR 'CoolFile\\shell\\edit' \\<br />
'' 'Edit Cool File'<br />
WriteRegStr HKCR 'CoolFile\\shell\\edit\\command' \\<br />
'' '$INSTDIR\\bin\\coolstuff.exe \"%1\"'<br />
System::Call \\<br />
'Shell32::SHChangeNotify(i 0x8000000, i 0, i 0, i 0)'<br />
")<br />
<br />
set (CPACK_NSIS_EXTRA_UNINSTALL_COMMANDS "<br />
DeleteRegKey HKCR '.cool'<br />
DeleteRegKey HKCR 'CoolFile'<br />
")<br />
</syntaxhighlight><br />
<br />
This creates a Windows file association for all files ending in .cool, so that when a user double-clicks a .cool file, coolstuff.exe is run with the full path to the file as an argument. It also sets up an association for editing the file from the Windows right-click menu using the same coolstuff.exe program. The Windows Explorer icon for the file is set to the icon found in the coolstuff.exe executable. When the package is uninstalled, the registry keys are removed. Since the double quotes and Windows path separators must be escaped, it is best to put this code into the CPACK_PROJECT_CONFIG_FILE for the project.<br />
<br />
<syntaxhighlight lang="text"><br />
configure_file(<br />
${CoolStuff_SOURCE_DIR}/CoolStuffCPackOptions.cmake.in<br />
${CoolStuff_BINARY_DIR}/CoolStuffCPackOptions.cmake @ONLY)<br />
<br />
set (CPACK_PROJECT_CONFIG_FILE<br />
${CoolStuff_BINARY_DIR}/CoolStuffCPackOptions.cmake)<br />
include (CPack)<br />
</syntaxhighlight><br />
<br />
<br />
====Installing Microsoft Run Time Libraries====<br />
<br />
Although not strictly an NSIS CPack command, if you are creating applications on Windows with the Microsoft compiler you will most likely want to distribute the run time libraries from Microsoft alongside your project. In CMake, all you need to do is the following:<br />
<br />
<syntaxhighlight lang="text"><br />
include (InstallRequiredSystemLibraries)<br />
</syntaxhighlight><br />
<br />
This will add the compiler run time libraries as install files that will go into the bin directory of your application. If you do not want the libraries to go into the bin directory, you would do this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS_SKIP TRUE)<br />
include (InstallRequiredSystemLibraries)<br />
install (PROGRAMS ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS}<br />
DESTINATION mydir)<br />
</syntaxhighlight><br />
<br />
It is important to note that the run time libraries must be right next to the executables of your package in order for Windows to find them. With Visual Studio 2005 and 2008, side by side manifest files are also required to be installed with your application when distributing the run time libraries. If you want to package a debug version of your software you will need to set CMAKE_INSTALL_DEBUG_LIBRARIES to ON prior to the include. Be aware, however, that the license terms may prohibit you from re-distributing the debug libraries. Double check the licensing terms for the version of Visual Studio you're using before deciding to set CMAKE_INSTALL_DEBUG_LIBRARIES to ON.<br />
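If the license terms do permit it, packaging the debug run time libraries is a one-variable change. A minimal sketch:<br />
<br />
<syntaxhighlight lang="text"><br />
# Package the debug run time libraries instead of the release ones.<br />
# Check the Visual Studio license terms before enabling this.<br />
set (CMAKE_INSTALL_DEBUG_LIBRARIES ON)<br />
include (InstallRequiredSystemLibraries)<br />
</syntaxhighlight><br />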
<br />
<br />
====CPack Component Install Support====<br />
<br />
By default, CPack's installers consider all of the files installed by a project as a single, monolithic unit: either the whole set of files is installed, or none of the files are installed. However, with many projects it makes sense for the installation to be subdivided into distinct, user-selectable components. Some users may want to install only the command-line tools for a project, while other users might want the GUI or the header files.<br />
<br />
This section describes how to configure CPack to generate component-based installers that allow users to select the set of project components that they wish to install. As an example, a simple installer will be created for a library that has three components: a library binary, a sample application, and a C++ header file. When finished the resulting installers for Windows and Mac OS X look like the ones in Figure 9.8.<br />
<br />
<<figure 9.8: Mac and Windows Component Installers>><br />
<br />
The simple example we will be working with is as follows; it has a library and an executable. CPack commands that have already been covered are used.<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6.0 FATAL_ERROR)<br />
project (MyLib)<br />
<br />
add_library(mylib mylib.cpp)<br />
<br />
add_executable(mylibapp mylibapp.cpp)<br />
target_link_libraries(mylibapp mylib)<br />
<br />
install (TARGETS mylib ARCHIVE DESTINATION lib)<br />
install (TARGETS mylibapp RUNTIME DESTINATION bin)<br />
install (FILES mylib.h DESTINATION include)<br />
<br />
# add CPack to project<br />
set (CPACK_PACKAGE_NAME "MyLib")<br />
set (CPACK_PACKAGE_VENDOR "CMake.org")<br />
set (CPACK_PACKAGE_DESCRIPTION_SUMMARY<br />
"MyLib - CPack Component Installation Example")<br />
set (CPACK_PACKAGE_VERSION "1.0.0")<br />
set (CPACK_PACKAGE_VERSION_MAJOR "1")<br />
set (CPACK_PACKAGE_VERSION_MINOR "0")<br />
set (CPACK_PACKAGE_VERSION_PATCH "0")<br />
set (CPACK_PACKAGE_INSTALL_DIRECTORY "CPack Component Example")<br />
<br />
# This must always be after all CPACK_* variables are defined<br />
include (CPack)<br />
</syntaxhighlight><br />
<br />
=====Specifying Components=====<br />
<br />
The first step in building a component-based installation is to identify the set of installable components. In this example, three components will be created: the library binary, the application, and the header file. This decision is arbitrary and project-specific, but be sure to identify the components that correspond to units of functionality important to your user, rather than basing the components on the internal structure of your program.<br />
<br />
For each of these components, we need to identify which component each of the installed files belongs in. For each INSTALL command in CMakeLists.txt, add an appropriate COMPONENT argument stating which component the installed files will be associated with:<br />
<br />
<syntaxhighlight lang="text"><br />
install (TARGETS mylib<br />
ARCHIVE<br />
DESTINATION lib<br />
COMPONENT libraries)<br />
install (TARGETS mylibapp<br />
RUNTIME<br />
DESTINATION bin<br />
COMPONENT applications)<br />
install(FILES mylib.h<br />
DESTINATION include<br />
COMPONENT headers)<br />
</syntaxhighlight><br />
<br />
Note that the COMPONENT argument to the INSTALL command is not new; it has been a part of CMake's INSTALL command to allow installation of only part of a project. If you are using any of the older installation commands (INSTALL_TARGETS, INSTALL_FILES, etc.), you will need to convert them to INSTALL commands in order to use components.<br />
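As an illustration of such a conversion, an old-style installation rule for the header in this example would be rewritten so that a COMPONENT argument can be attached:<br />
<br />
<syntaxhighlight lang="text"><br />
# Old style: no component support<br />
# install_files (/include FILES mylib.h)<br />
<br />
# New style: a COMPONENT argument can be attached<br />
install (FILES mylib.h<br />
         DESTINATION include<br />
         COMPONENT headers)<br />
</syntaxhighlight><br />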
<br />
The next step is to notify CPack of the names of all of the components in your project by calling the cpack_add_component function for each component of the package:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component(applications)<br />
cpack_add_component(libraries)<br />
cpack_add_component(headers)<br />
</syntaxhighlight><br />
<br />
At this point you can build a component-based installer with CPack that will allow one to independently install the applications, libraries, and headers of MyLib. The Windows and Mac OS X installers will look like the ones shown in Figure 9.9.<br />
<br />
<<figure 9.9:Windows and Mac OS X Component Installer First Page>><br />
<br />
<br />
=====Naming Components=====<br />
<br />
At this point, you may have noted that the names of the actual components in the installer are not very descriptive: they just say "applications," "libraries," or "headers," as specified in the component names. These names can be improved by using the DISPLAY_NAME option in the cpack_add_component function:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (applications DISPLAY_NAME<br />
"MyLib Application")<br />
cpack_add_component (libraries DISPLAY_NAME "Libraries")<br />
cpack_add_component (headers DISPLAY_NAME "C++ Headers")<br />
</syntaxhighlight><br />
<br />
Any variable prefixed with CPACK_COMPONENT_${COMPNAME}, where ${COMPNAME} is the uppercase name of a component, is used to set a particular property of that component in the installer. Here, we set the DISPLAY_NAME property of each of our components so that we get human-readable names. These names will be listed in the selection box rather than the internal component names "applications," "libraries," and "headers."<br />
<br />
<<figure 9.10: Windows and Mac OS X Installers with named components>><br />
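The cpack_add_component calls above are equivalent to setting these variables directly, which can be useful in a CPACK_PROJECT_CONFIG_FILE. A sketch:<br />
<br />
<syntaxhighlight lang="text"><br />
# Equivalent variable form of the DISPLAY_NAME settings above<br />
set (CPACK_COMPONENT_APPLICATIONS_DISPLAY_NAME "MyLib Application")<br />
set (CPACK_COMPONENT_LIBRARIES_DISPLAY_NAME "Libraries")<br />
set (CPACK_COMPONENT_HEADERS_DISPLAY_NAME "C++ Headers")<br />
</syntaxhighlight><br />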
<br />
<br />
=====Adding Component Descriptions=====<br />
<br />
There are several other properties associated with components, including the ability to make a component hidden, required, or disabled by default, that provide additional descriptive information. Of particular note is the DESCRIPTION property, which provides some descriptive text for the component. This descriptive text will show up in a separate "description" box in the installer, and will be updated either when the user's mouse hovers over the name of the corresponding component (Windows), or when the user clicks on a component (Mac OS X). We will add a description for each of our components below:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (applications DISPLAY_NAME "MyLib Application"<br />
DESCRIPTION<br />
"An extremely useful application that makes use of MyLib"<br />
)<br />
cpack_add_component (libraries DISPLAY_NAME "Libraries"<br />
DESCRIPTION<br />
"Static libraries used to build programs with MyLib"<br />
)<br />
cpack_add_component (headers DISPLAY_NAME "C++ Headers"<br />
DESCRIPTION<br />
"C/C++ header files for use with MyLib"<br />
)<br />
</syntaxhighlight><br />
<br />
Generally, descriptions should provide enough information for the user to make a decision on whether to install the component, but should not themselves be more than a few lines long (the "Description" box in the installers tends to be small). Figure 9.11 shows the description display for both the Windows and Mac OS X installers.<br />
<br />
<<figure 9.11: Component Installers with descriptions>><br />
<br />
<br />
=====Component Interdependencies=====<br />
<br />
With most projects the various components are not completely independent. For example, an application component may depend on the shared libraries in another component to execute properly, such that installing the application component without the corresponding shared libraries would result in an unusable installation. CPack allows you to express the dependencies between components, so that a component will only be installed if all of the other components it depends on are also installed.<br />
<br />
To illustrate component dependencies we will place a simple restriction on our component-based installer. Since we do not provide source code in our installer, the C++ header files we distribute can only actually be used if the user also installs the library binary to link their program against. Thus, the "headers" component depends on the availability of the "libraries" component. We can express this notion by setting the DEPENDS property for the headers component as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (headers DISPLAY_NAME "C++ Headers"<br />
DESCRIPTION<br />
"C/C++ header files for use with MyLib"<br />
DEPENDS libraries<br />
)<br />
</syntaxhighlight><br />
<br />
The DEPENDS property for a component is actually a list; as such, a component can depend on several other components. By expressing all of the component dependencies in this manner, you can ensure that users will not be able to select an incomplete set of components at installation time.<br />
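Because DEPENDS accepts a list, a component can name several prerequisites at once. A sketch, assuming a hypothetical "documentation" component that is not part of this example:<br />
<br />
<syntaxhighlight lang="text"><br />
# Hypothetical component that requires two other components<br />
cpack_add_component (documentation<br />
    DISPLAY_NAME "Documentation"<br />
    DEPENDS libraries applications)<br />
</syntaxhighlight><br />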
<br />
<br />
=====Grouping Components=====<br />
<br />
When the number of components in your project grows large, you may need to provide additional organization for the list of components. To help with this organization, CPack includes the notion of component groups. A component group is simply a way to provide a name for a group of related components. Within the user interface a component group has its own name, and underneath that group are the names of all of the components in that group. Users will have the option to (de-)select the installation of all components in the group with a single click, or expand the group to select individual components.<br />
<br />
We will expand our example by categorizing its three components, "applications," "libraries," and "headers," into "Runtime" and "Development" groups. We can place a component into a group by using the GROUP option to the cpack_add_component function as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (applications<br />
DISPLAY_NAME "MyLib Application"<br />
DESCRIPTION<br />
"An extremely useful application that makes use of MyLib"<br />
GROUP Runtime)<br />
cpack_add_component (libraries<br />
DISPLAY_NAME "Libraries"<br />
DESCRIPTION<br />
"Static libraries used to build programs with MyLib"<br />
GROUP Development)<br />
cpack_add_component (headers<br />
DISPLAY_NAME "C++ Headers"<br />
DESCRIPTION "C/C++ header files for use with MyLib"<br />
GROUP Development<br />
DEPENDS libraries<br />
)<br />
</syntaxhighlight><br />
<br />
Like components, component groups have various properties that can be customized, including the DISPLAY_NAME and DESCRIPTION. For example, the following code adds an expanded description to the "Development" group:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component_group(Development<br />
EXPANDED<br />
DESCRIPTION<br />
"All of the tools you'll ever need to develop software")<br />
</syntaxhighlight><br />
<br />
Once you have customized the component groups to your liking, rebuild the binary installer to see the new organization: the MyLib application will show up under the new "Runtime" group, while the MyLib library and C++ header will show up under the new "Development" group. One can easily turn on/off all of the components within a group using the installer's GUI. This can be seen in Figure 9.12.<br />
<br />
<br />
=====Installation Types (NSIS Only)=====<br />
<br />
<<Figure 9.12: Component Grouping>><br />
<br />
When a project contains a large number of components, it is common for a Windows installer to provide pre-selected sets of components based on specific user needs. For example, a user wanting to develop software against a library will want one set of components, while an end user might use an entirely different set. CPack supports this notion of pre-selected component sets via installation types. An installation type is simply a set of components. When the user selects an installation type, exactly that set of components is selected. Then the user is permitted to further customize their installation as desired. Currently this is only supported by the NSIS generator.<br />
<br />
For our simple example, we will create two installation types: a "Full" installation type that contains all of the components, and a "Developer" installation type that includes only the libraries and headers. To do this we use the function cpack_add_install_type to add the types.<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_install_type(Full DISPLAY_NAME "Everything")<br />
cpack_add_install_type (Developer)<br />
</syntaxhighlight><br />
<br />
Next, we set the INSTALL_TYPES property of each component to state which installation types will include that component. This is done with the INSTALL_TYPES option to the cpack_add_component function.<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (libraries DISPLAY_NAME "Libraries"<br />
DESCRIPTION<br />
"Static libraries used to build programs with MyLib"<br />
GROUP Development<br />
INSTALL_TYPES Developer Full)<br />
cpack_add_component (applications<br />
DISPLAY_NAME "MyLib Application"<br />
DESCRIPTION<br />
"An extremely useful application that makes use of MyLib"<br />
GROUP Runtime<br />
INSTALL_TYPES Full)<br />
cpack_add_component (headers<br />
DISPLAY_NAME "C++ Headers"<br />
DESCRIPTION "C/C++ header files for use with MyLib"<br />
GROUP Development<br />
DEPENDS libraries<br />
INSTALL_TYPES Developer Full)<br />
</syntaxhighlight><br />
<br />
Components can be listed under any number of installation types. If you rebuild the Windows installer, the components page will contain a combo box that allows you to select the installation type, and therefore its corresponding set of components as shown in Figure 9.13.<br />
<br />
<br />
=====Variables that control CPack components=====<br />
<br />
The functions cpack_add_install_type, cpack_add_component_group, and cpack_add_component just set CPACK_ variables. Those variables are described in the following list:<br />
<br />
'''CPACK_COMPONENTS_ALL''' This is a list containing the names of all components that should be installed by CPack. The presence of this variable indicates that CPack should build a component-based installer. Files associated with any components not listed here, or with any installation commands not associated with a component, will not be installed.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_DISPLAY_NAME''' The displayed name of the component ${COMPNAME}, used in graphical installers to display the component name. This value can be any string.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_DESCRIPTION''' An extended description of the component ${COMPNAME}, used in graphical installers to give the user additional information about the component. Descriptions can span multiple lines using "\n" as the line separator.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_HIDDEN''' A flag that indicates that this component will be hidden in the graphical installer, and therefore cannot be selected or installed. Only available with NSIS.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_REQUIRED''' A flag that indicates that this component is required, and therefore will always be installed. It will be visible in the graphical installer but it cannot be unselected.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_DISABLED''' A flag that indicates that this component should be disabled (unselected) by default. The user is free to select this component for installation.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_DEPENDS''' Lists the components on which this component depends. If this component is selected, then each of the components listed must also be selected.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_GROUP''' Names a component group that this component is a part of. If not provided, the component will be a standalone component, not part of any component group.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_INSTALL_TYPES''' Lists the installation types that this component is a part of. When one of these installations types is selected, this component will automatically be selected. Only available with NSIS.<br />
<br />
'''CPACK_COMPONENT_GROUP_${GROUPNAME}_DISPLAY_NAME''' The displayed name of the component group ${GROUPNAME}, used in graphical installers to display the component group name. This value can be any string.<br />
<br />
'''CPACK_COMPONENT_GROUP_${GROUPNAME}_DESCRIPTION''' An extended description of the component group ${GROUPNAME}, used in graphical installers to give the user additional information about the components contained within this group. Descriptions can span multiple lines using "\n" as the line separator.<br />
<br />
'''CPACK_COMPONENT_GROUP_${GROUPNAME}_BOLD_TITLE''' A flag indicating whether the group title should be in bold. Only available with NSIS.<br />
<br />
'''CPACK_COMPONENT_GROUP_${GROUPNAME}_EXPANDED''' A flag indicating whether the group should start out "expanded", showing its components. Otherwise only the group name itself will be shown until the user clicks on the group. Only available with NSIS.<br />
<br />
'''CPACK_INSTALL_TYPE_${INSTNAME}_DISPLAY_NAME''' The displayed name of the installation type. This value can be any string.<br />
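As an illustration, the cpack_add_install_type calls shown earlier amount to setting these variables directly. A sketch:<br />
<br />
<syntaxhighlight lang="text"><br />
# Equivalent variable form of the earlier cpack_add_install_type calls<br />
set (CPACK_ALL_INSTALL_TYPES Full Developer)<br />
set (CPACK_INSTALL_TYPE_FULL_DISPLAY_NAME "Everything")<br />
</syntaxhighlight><br />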
<br />
<br />
===CPack for Cygwin Setup===<br />
<br />
Cygwin (http://www.cygwin.com/) is a Linux-like environment for Windows that consists of a run time DLL and a collection of tools. To add tools to the official Cygwin distribution, the cygwin setup program is used. The setup tool has very specific layouts for the source and binary trees that are to be included. CPack can create the source and binary tar files and correctly compress them with bzip2 so that they can be uploaded to the cygwin mirror sites. You must of course have your package accepted by the cygwin community before that is done. Since the layout of the package is more restrictive than other packaging tools, you may have to change some of the install options for your project.<br />
<br />
The cygwin setup program requires that all files be installed into /usr/bin, /usr/share/package-version, /usr/share/man, and /usr/share/doc/package-version. The cygwin CPack generator automatically adds the /usr prefix to the project's install directory, so the project itself must install into bin and share.<br />
<br />
Cygwin also requires that you provide a shell script that can be used to create the package from the sources. Any cygwin specific patches that are required for the package must also be provided in a diff file. CMake's configure_file command can be used to create both of these files for a project. Since CMake is a cygwin package, the CMake code used to configure CMake for the cygwin CPack generators is as follows<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_NAME CMake)<br />
<br />
# setup the name of the package for cygwin<br />
set (CPACK_PACKAGE_FILE_NAME<br />
"${CPACK_PACKAGE_NAME}-${CMake_VERSION}")<br />
<br />
# the source has the same name as the binary<br />
set (CPACK_SOURCE_PACKAGE_FILE_NAME ${CPACK_PACKAGE_FILE_NAME})<br />
<br />
# Create a cygwin version number in case there are changes<br />
# for cygwin that are not reflected upstream in CMake<br />
set (CPACK_CYGWIN_PATCH_NUMBER 1)<br />
<br />
# if we are on cygwin and have cpack, then force the<br />
# doc, data and man dirs to conform to cygwin style directories<br />
set (CMAKE_DOC_DIR "/share/doc/${CPACK_PACKAGE_FILE_NAME}")<br />
set (CMAKE_DATA_DIR "/share/${CPACK_PACKAGE_FILE_NAME}")<br />
set (CMAKE_MAN_DIR "/share/man")<br />
<br />
# These files are required by the cmCPackCygwinSourceGenerator and<br />
# the files put into the release tar files.<br />
set (CPACK_CYGWIN_BUILD_SCRIPT<br />
"${CMake_BINARY_DIR}/@CPACK_PACKAGE_FILE_NAME@-<br />
@CPACK_CYGWIN_PATCH_NUMBER@.sh")<br />
set (CPACK_CYGWIN_PATCH_FILE<br />
"${CMake_BINARY_DIR}/@CPACK_PACKAGE_FILE_NAME@-<br />
@CPACK_CYGWIN_PATCH_NUMBER@.patch")<br />
<br />
# include the sub directory for cygwin releases<br />
include (Utilities/Release/Cygwin/CMakeLists.txt)<br />
<br />
# when packaging source make sure to exclude the .build directory<br />
set (CPACK_SOURCE_IGNORE_FILES<br />
"/CVS/" "/\\\\.build/" "/\\\\.svn/" "\\\\.swp$" "\\\\.#" "/#" "~$")<br />
</syntaxhighlight><br />
<br />
Utilities/Release/Cygwin/CMakeLists.txt:<br />
<br />
<syntaxhighlight lang="text"><br />
# create the setup-hint file for cygwin<br />
configure_file (<br />
"${CMake_SOURCE_DIR}/Utilities/Release/Cygwin/cygwin-setup.hint.in"<br />
"${CMake_BINARY_DIR}/setup.hint")<br />
<br />
configure_file (<br />
"${CMake_SOURCE_DIR}/Utilities/Release/Cygwin/README.cygwin.in"<br />
"${CMake_BINARY_DIR}/Docs/@CPACK_PACKAGE_FILE_NAME@-<br />
@CPACK_CYGWIN_PATCH_NUMBER@.README" )<br />
<br />
install_files (/share/doc/Cygwin FILES<br />
${CMake_BINARY_DIR}/Docs/@CPACK_PACKAGE_FILE_NAME@-<br />
@CPACK_CYGWIN_PATCH_NUMBER@.README)<br />
<br />
# create the shell script that can build the project<br />
configure_file (<br />
"${CMake_SOURCE_DIR}/Utilities/Release/Cygwin/cygwin-package.sh.in"<br />
${CPACK_CYGWIN_BUILD_SCRIPT})<br />
<br />
# Create the patch required for cygwin for the project<br />
configure_file (<br />
"${CMake_SOURCE_DIR}/Utilities/Release/Cygwin/cygwin-patch.diff.in"<br />
${CPACK_CYGWIN_PATCH_FILE})<br />
</syntaxhighlight><br />
<br />
The file Utilities/Release/Cygwin/cygwin-package.sh.in can be found in the CMake source tree. It is a shell script that can be used to re-create the cygwin package from source. For other projects, there is a template install script in Templates/cygwin-package.sh.in. This script should be able to configure and package any cygwin-based CPack project, and it is required for all official cygwin packages.<br />
<br />
Another important file for cygwin binaries is share/doc/Cygwin/package-version.README. This file should contain the information required by cygwin about the project. In the case of CMake, the file is configured so that it can contain the correct version information. For example, part of that file for CMake looks like this:<br />
<br />
<syntaxhighlight lang="text"><br />
Build instructions:<br />
unpack CMake-2.5.20071029-1-src.tar.bz2<br />
if you use setup to install this src package, it will be<br />
unpacked under /usr/src automatically<br />
cd /usr/src<br />
./CMake-2.5.20071029-1.sh all<br />
This will create:<br />
/usr/src/CMake-2.5.20071029.tar.bz2<br />
/usr/src/CMake-2.5.20071029-1-src.tar.bz2<br />
</syntaxhighlight><br />
<br />
<br />
===CPack for Mac OS X PackageMaker===<br />
<br />
On the Apple Mac OS X operating system, CPack provides the ability to use the system PackageMaker tool. This section will show the CMake application install screens users will see when installing the CMake package on OS X. The CPack variables set to change the text in the installer will be given for each screen of the installer.<br />
<br />
<<Figure 9.14: Mac package inside .dmg>><br />
<br />
In Figure 9.14, the .pkg file found inside the .dmg disk image created by the CPack package maker for Mac OS X is seen. The name of this file is controlled by the CPACK_PACKAGE_FILE_NAME variable. If this is not set, CPack will use a default name based on the package name and version settings.<br />
<br />
When the .pkg file is run, the package wizard starts with the screen seen in Figure 9.15. The text in this window is controlled by the file pointed to by the CPACK_RESOURCE_FILE_WELCOME variable.<br />
<br />
Figure 9.16 shows the read me section of the package wizard. The text for this window is customized by using the CPACK_RESOURCE_FILE_README variable. It should contain a path to the file containing the text that should be displayed on this screen.<br />
<br />
Figure 9.17 contains the license text for the package. Users must accept the license for the installation process to continue. The text for the license comes from the file pointed to by the CPACK_RESOURCE_FILE_LICENSE variable.<br />
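Taken together, the three customizable screens can be configured with something like the following sketch; the file names are illustrative, not required:<br />
<br />
<syntaxhighlight lang="text"><br />
# File names below are illustrative; point them at files in your project<br />
set (CPACK_RESOURCE_FILE_WELCOME "${PROJECT_SOURCE_DIR}/Welcome.txt")<br />
set (CPACK_RESOURCE_FILE_README  "${PROJECT_SOURCE_DIR}/ReadMe.txt")<br />
set (CPACK_RESOURCE_FILE_LICENSE "${PROJECT_SOURCE_DIR}/License.txt")<br />
include (CPack)<br />
</syntaxhighlight><br />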
<br />
The other screens in the installation process are not customizable from CPack. To access more advanced features of this installer, there are two CPack templates that you can modify: Modules/CPack.Info.plist.in and Modules/CPack.Description.plist.in. These files can be replaced by using the CMAKE_MODULE_PATH variable to point to a directory in your project containing a modified copy of either or both.<br />
<br />
<<Figure 9.15: Introduction screen of the Mac PackageMaker installer>><br />
<br />
<<Figure 9.16: Readme section of the Mac package wizard>><br />
<br />
<<Figure 9.17: License screen of the Mac packager>><br />
<br />
<br />
===CPack for Mac OS X Drag and Drop===<br />
<br />
CPack also supports the creation of a Drag and Drop installer for the Mac. In this case a .dmg disk image is created. The image contains both a symbolic link to the /Applications directory and a copy of the project's install tree. In this case it is best to use a Mac application bundle or a single folder containing your relocatable installation as the only install target for the project. The variable CPACK_PACKAGE_EXECUTABLES is used to point to the application bundle for the project.<br />
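A minimal setup might look like the following sketch; the bundle target name "CoolStuff" and the explicit "DragNDrop" generator selection are assumptions for illustration:<br />
<br />
<syntaxhighlight lang="text"><br />
# Hypothetical bundle target; MACOSX_BUNDLE makes it an .app bundle<br />
add_executable (CoolStuff MACOSX_BUNDLE coolstuff.cpp)<br />
install (TARGETS CoolStuff BUNDLE DESTINATION .)<br />
<br />
set (CPACK_GENERATOR "DragNDrop")<br />
set (CPACK_PACKAGE_EXECUTABLES "CoolStuff" "Cool Stuff")<br />
include (CPack)<br />
</syntaxhighlight><br />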
<br />
<<Figure 9.18: Drag and Drop License dialog>><br />
<br />
<br />
===CPack for Mac OS X X11 Applications===<br />
<br />
<<Figure 9.19: Resulting Drag and Drop folders>><br />
<br />
CPack also includes an OS X X11 package maker generator. This can be used to package X11 based applications, as well as make them act more like native OS X applications by wrapping them with a script that will allow users to run them as they would any native OS X application. Much like the OS X PackageMaker generator, the OS X X11 generator creates a disk image .dmg file. In this example, an X11 application called KWPolygonalObjectViewerExample is packaged with the OS X X11 CPack generator.<br />
<br />
<<Figure 9.20: Mac OS X X11 package disk image>><br />
<br />
Figure 9.20 shows the disk image created. In this case CPACK_PACKAGE_NAME was set to KWPolygonalObjectViewerExample, and the version information was left at the CPack default of 0.1.1. The variable CPACK_PACKAGE_EXECUTABLES was set to the pair KWPolygonalObjectViewerExample and KWPolygonalObjectViewerExample, as the installed X11 application is called KWPolygonalObjectViewerExample.<br />
<br />
Figure 9.21 shows what a user would see after clicking on the .dmg file created by CPack: Mac OS X mounts the disk image as a disk.<br />
<br />
<<Figure 9.21: Opening OS X X11 disk image>><br />
<br />
<<Figure 9.22: Mounted .dmg disk image>><br />
<br />
This figure shows the mounted disk image. It will contain a symbolic link to the /Applications directory for the system, and it will contain an application bundle for each executable found in CPACK_PACKAGE_EXECUTABLES. Users can then drag and drop the applications into the Applications folder as seen in the figure below.<br />
<br />
<<Figure 9.23: Drag and drop application to Applications>><br />
<br />
CPack actually provides a C++ based executable that can run an X11 application via the Apple scripting language. The application bundle installed will run that forwarding application when the user double clicks on KWPolygonalObjectViewerExample. This script will make sure that the X11 server is started. The script that is run can be found in CMake/Modules/CPack.RuntimeScript.in. The source for the script launcher C++ program can be found in Source/CPack/OSXScriptLauncher.cxx.<br />
<br />
<br />
===CPack for Debian Packages===<br />
<br />
A Debian package (.deb) is simply an "ar" archive. CPack includes the code for the BSD-style ar that is required by Debian packages. The Debian packager uses the standard set of CPack variables to initialize a set of Debian-specific variables. These can be overridden in the CPACK_PROJECT_CONFIG_FILE; the name of the generator is "DEB". The variables used by the DEB generator are as follows:<br />
<br />
'''CPACK_DEBIAN_PACKAGE_NAME''' defaults to the lower case of CPACK_PACKAGE_NAME.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_ARCHITECTURE''' defaults to i386.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_DEPENDS''' must be set to the packages that this package depends on; if it is empty, a warning is emitted.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_MAINTAINER''' defaults to the value of CPACK_PACKAGE_CONTACT.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_DESCRIPTION''' defaults to the value of CPACK_PACKAGE_DESCRIPTION_SUMMARY.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_SECTION''' defaults to devel.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_PRIORITY''' defaults to optional.<br />
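<br />
A sketch of how these defaults might be overridden for a hypothetical package (the maintainer, dependency, and section values here are invented for illustration):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_GENERATOR "DEB")<br />
# hypothetical values for illustration only<br />
set (CPACK_DEBIAN_PACKAGE_MAINTAINER "Jane Doe <jane@example.com>")<br />
set (CPACK_DEBIAN_PACKAGE_DEPENDS "libc6 (>= 2.3.1-6)")<br />
set (CPACK_DEBIAN_PACKAGE_SECTION "utils")<br />
include (CPack)<br />
</syntaxhighlight><br />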
<br />
<br />
===CPack for RPM===<br />
<br />
CPack has support for creating Linux RPM files. The name of the generator as set in CPACK_GENERATOR is "RPM". The RPM package capability requires that rpmbuild is installed on the machine and is in PATH. The RPM packager uses the standard set of CPack variables to initialize RPM specific variables. The RPM specific variables are as follows:<br />
<br />
'''CPACK_RPM_PACKAGE_SUMMARY''' defaults to the value of CPACK_PACKAGE_DESCRIPTION_SUMMARY.<br />
<br />
'''CPACK_RPM_PACKAGE_NAME''' defaults to the lower case of CPACK_PACKAGE_NAME.<br />
<br />
'''CPACK_RPM_PACKAGE_VERSION''' defaults to the value of CPACK_PACKAGE_VERSION.<br />
<br />
'''CPACK_RPM_PACKAGE_ARCHITECTURE''' defaults to i386.<br />
<br />
'''CPACK_RPM_PACKAGE_RELEASE''' defaults to 1. This is the version of the RPM file, not the version of the software being packaged.<br />
<br />
'''CPACK_RPM_PACKAGE_GROUP''' defaults to none.<br />
<br />
'''CPACK_RPM_PACKAGE_VENDOR''' defaults to the value of CPACK_PACKAGE_VENDOR.<br />
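<br />
A sketch of setting a few of these RPM variables for a hypothetical package (the release number, group, and vendor values are invented for illustration):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_GENERATOR "RPM")<br />
# hypothetical values for illustration only<br />
set (CPACK_RPM_PACKAGE_RELEASE 2)<br />
set (CPACK_RPM_PACKAGE_GROUP "Applications/Engineering")<br />
set (CPACK_RPM_PACKAGE_VENDOR "Cool Company")<br />
include (CPack)<br />
</syntaxhighlight><br />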
<br />
<br />
===CPack Files===<br />
<br />
There are a number of files used by CPack that can be useful for learning more about how CPack works and what options you can set. These files can also be used as the starting point for other CPack generators. They can mostly be found in the Modules and Templates directories of CMake and typically start with the prefix CPack. As of version 2.8.8, you may also refer to cpack --help-variable-list and cpack --help-variable for the full set of documented CPACK_* variables.<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>
<hr />
<div>==CHAPTER NINE::PACKAGING WITH CPack==<br />
<br />
CPack is a powerful, easy to use, cross-platform software packaging tool distributed with CMake since version 2.4.2. It uses the generators concept from CMake to abstract package generation on specific platforms. It can be used with or without CMake, but it may depend on some software being installed on the system. Using a simple configuration file, or using a CMake module, the author of a project can package a complex project into a simple installer. This chapter will describe how to apply CPack to a CMake project.<br />
<br />
<br />
===CPack Basics===<br />
<br />
Users of your software may not always want to, or be able to, build the software in order to install it. The software may be closed source, or it may take a long time to compile, or in the case of an end user application, the users may not have the skill or the tools to build the application. For these cases, what is needed is a way to build the software on one machine, and then move the install tree to a different machine. The most basic way to do this is to use the DESTDIR environment variable to install the software into a temporary location, then to tar or zip up that directory and move it to another machine. However, the DESTDIR approach falls short on Windows, simply because path names typically start with a drive letter (C:/) and you cannot simply prefix one full path with another and get a valid path name. Another more powerful approach is to use CPack, included in CMake.<br />
<br />
CPack is a tool included with CMake that can be used to create installers and packages for projects. CPack can create two basic types of packages: source and binary. CPack works in much the same way as CMake does for building software. It does not aim to replace native packaging tools; rather, it provides a single interface to a variety of tools. Currently CPack supports the creation of Windows installers using the NullSoft installer NSIS, the Mac OS X PackageMaker tool, OS X Drag and Drop, OS X X11 Drag and Drop, Cygwin Setup packages, Debian packages, RPMs, .tar.gz, .sh (self extracting .tar.gz files), and .zip compressed files. The implementation of CPack works in a similar way to CMake. For each type of packaging tool supported, there is a CPack generator written in C++ that is used to run the native tool and create the package. For simple tar based packages, CPack includes a library version of tar and does not require tar to be installed on the system. For many of the other installers, native tools must be present for CPack to function.<br />
<br />
With source packages, CPack makes a copy of the source tree and creates a zip or tar file. For binary packages, the use of CPack is tied to the install commands working correctly for a project. When setting up install commands, the first step is to make sure the files go into the correct directory structure with the correct permissions. The next step is to make sure the software is relocatable and can run in an installed tree. This may require changing the software itself, and there are many techniques to do that for different environments that go beyond the scope of this book. Basically, executables should be able to find data or other files using paths relative to the location where they are installed. CPack installs the software into a temporary directory, and copies the install tree into the format of the native packaging tool. Once the install commands have been added to a project, enabling CPack in the simplest case is done by including the CPack.cmake file into the project.<br />
<br />
<br />
====Simple Example====<br />
<br />
The most basic CPack project would look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
project(CoolStuff)<br />
add_executable(coolstuff coolstuff.cxx)<br />
install(TARGETS coolstuff RUNTIME DESTINATION bin)<br />
include(CPack)<br />
</syntaxhighlight><br />
<br />
In the CoolStuff project, an executable is created and installed into the directory bin. Then the CPack file is included by the project. At this point the CoolStuff project will have CPack enabled. To run CPack for CoolStuff, you would first build the project as you would any other CMake project. CPack adds several targets to the generated project: package and package_source for Makefiles, and PACKAGE for Visual Studio and Xcode. For example, to build a source and binary package for CoolStuff using a Makefile generator you would run the following commands:<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir build<br />
cd build<br />
cmake ../CoolStuff<br />
make<br />
make package<br />
make package_source<br />
</syntaxhighlight><br />
<br />
This would create a source zip file called CoolStuff-0.1.1-Source.zip, a NSIS installer called CoolStuff-0.1.1-win32.exe, and a binary zip file CoolStuff-0.1.1-win32.zip. The same thing could be done using the CPack command line.<br />
<br />
<syntaxhighlight lang="text"><br />
cd build<br />
cpack -C CPackConfig.cmake<br />
cpack -C CPackSourceConfig.cmake<br />
</syntaxhighlight><br />
<br />
<br />
====What Happens When CPack.cmake Is Included?====<br />
<br />
When the include(CPack) command is executed, the CPack.cmake file is included into the project. By default this will use the configure_file command to create CPackConfig.cmake and CPackSourceConfig.cmake in the binary tree of the project. These files contain a series of set commands that set variables for use when CPack is run during the packaging step. The names of the files that are configured by the CPack.cmake file can be customized with these two variables: CPACK_OUTPUT_CONFIG_FILE, which defaults to CPackConfig.cmake, and CPACK_SOURCE_OUTPUT_CONFIG_FILE, which defaults to CPackSourceConfig.cmake.<br />
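<br />
For example, a project could rename the configuration files before including CPack (the file names here are arbitrary assumptions for illustration):<br />
<br />
<syntaxhighlight lang="text"><br />
# must be set before include(CPack) to take effect<br />
set (CPACK_OUTPUT_CONFIG_FILE "MyCPackConfig.cmake")<br />
set (CPACK_SOURCE_OUTPUT_CONFIG_FILE "MyCPackSourceConfig.cmake")<br />
include (CPack)<br />
</syntaxhighlight><br />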
<br />
The source for these files can be found in Templates/CPackConfig.cmake.in. This file contains some comments and a single variable that is set by CPack.cmake. The file contains this line of CMake code:<br />
<br />
<syntaxhighlight lang="text"><br />
@_CPACK_OTHER_VARIABLES_@<br />
</syntaxhighlight><br />
<br />
If the project contains the file CPackConfig.cmake.in in the top level of the source tree, that file will be used instead of the file in the Templates directory. If the project contains the file CPackSourceConfig.cmake.in, then that file will be used for the creation of CPackSourceConfig.cmake.<br />
<br />
The configuration files created by CPack.cmake will contain all the variables that begin with "CPACK_" in the current project. This is done using the command<br />
<br />
<syntaxhighlight lang="text"><br />
get_cmake_property(res VARIABLES)<br />
</syntaxhighlight><br />
<br />
The above command gets all variables defined for the current CMake project. Some CMake code then looks for all variables starting with "CPACK_", and each variable found is configured into the two configuration files as CMake code. For example, if you had a variable set like this in your CMake project<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_NAME "CoolStuff")<br />
</syntaxhighlight><br />
<br />
CPackConfig.cmake and CPackSourceConfig.cmake would both contain the same line:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_NAME "CoolStuff")<br />
</syntaxhighlight><br />
<br />
It is important to keep in mind that CPack is run after CMake on the project. CPack uses the same parser as CMake, but will not have the same variable values as the CMake project. It will only have the variables that start with CPACK_, and these variables will be configured into a configuration file by CMake. This can cause some errors and confusion if the values of the variables use escape characters. Since they are getting parsed twice by the CMake language, they will need double the level of escaping. For example, if you had the following in your CMake project:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool \"Company\"")<br />
</syntaxhighlight><br />
<br />
The resulting CPack files would have this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool "Company"")<br />
</syntaxhighlight><br />
<br />
That would not be exactly what you would want or expect. In fact, it just wouldn't work. To get around this problem, there are two solutions. The first is to add an additional level of escapes to the original set command like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool \\\"Company\\\"")<br />
</syntaxhighlight><br />
<br />
This would result in the correct set command which would look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool \"Company\"")<br />
</syntaxhighlight><br />
<br />
The second solution to the escaping problem is to use a CPack project config file, explained in the next section.<br />
<br />
<br />
====Adding Custom CPack Options====<br />
<br />
To avoid the escaping problem, a project-specific CPack configuration file can be specified. This file will be loaded by CPack after CPackConfig.cmake or CPackSourceConfig.cmake is loaded, and CPACK_GENERATOR will be set to the CPack generator being run. Variables set in this file only require one level of CMake escapes. This file can be configured or not, and contains regular CMake code. In the example above, you could move the CPACK_PACKAGE_VENDOR setting into a file MyCPackOptions.cmake.in and configure that file into the build tree of the project. Then set the project configuration file path like this:<br />
<br />
<syntaxhighlight lang="text"><br />
configure_file ("${PROJECT_SOURCE_DIR}/MyCPackOptions.cmake.in"<br />
               "${PROJECT_BINARY_DIR}/MyCPackOptions.cmake"<br />
               @ONLY)<br />
set (CPACK_PROJECT_CONFIG_FILE<br />
     "${PROJECT_BINARY_DIR}/MyCPackOptions.cmake")<br />
</syntaxhighlight><br />
<br />
Where MyCPackOptions.cmake.in contained:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_VENDOR "Cool \"Company\"")<br />
</syntaxhighlight><br />
<br />
The CPACK_PROJECT_CONFIG_FILE variable should contain the full path to the CPack config file for the project, as seen in the above example. This has the added advantage that the CMake code can contain if statements based on the CPACK_GENERATOR value, so that packager specific values can be set for a project. For example, the CMake project sets the icon for the installer in this file:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_NSIS_MUI_ICON<br />
"@CMake_SOURCE_DIR@/Utilities/Release\\CMakeLogo.ico")<br />
</syntaxhighlight><br />
<br />
Note that the path has forward slashes except for the last part, which has an escaped backslash as the path separator. As of the writing of this book, NSIS needed the last part of the path to have a Windows style slash. If you do not do this, you may get the following error:<br />
<br />
<syntaxhighlight lang="text"><br />
File: ".../Release/CMakeLogo.ico" -> no files found.<br />
Usage: File [/nonfatal] [/a] ([/r] [/x filespec [...]]<br />
filespec [...] | /oname=outfile one_file_only)<br />
</syntaxhighlight><br />
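<br />
The generator-specific branching described above can be sketched like this in the project config file (the NSIS-only install directory value is a hypothetical example):<br />
<br />
<syntaxhighlight lang="text"><br />
# this file is read by CPack itself, so CPACK_GENERATOR is set<br />
if (CPACK_GENERATOR STREQUAL "NSIS")<br />
  # hypothetical NSIS-only setting<br />
  set (CPACK_PACKAGE_INSTALL_DIRECTORY "CoolStuff")<br />
endif ()<br />
</syntaxhighlight><br />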
<br />
<br />
====Options Added by CPack====<br />
<br />
In addition to creating the two configuration files, CPack.cmake will add some advanced options to your project. The options added depend on the environment and OS that CMake is running on, and control the default packages that are created by CPack. These options are of the form CPACK_<CPackGeneratorName>, where the generator names available on each platform can be found in the following table:<br />
<br />
<<Figure platform table>><br />
<br />
Turning these options on or off affects the packages that are created when running CPack with no options. If the option is off in the CMakeCache.txt file for the project, you can still build that package type by specifying the -G option to the CPack command line.<br />
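<br />
For example, even if the ZIP option were off in the cache, a ZIP package could still be built like this (following the command-line form used earlier in this chapter):<br />
<br />
<syntaxhighlight lang="text"><br />
cd build<br />
cpack -G ZIP -C CPackConfig.cmake<br />
</syntaxhighlight><br />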
<br />
<br />
===CPack Source Packages===<br />
<br />
Source packages in CPack simply copy the entire source tree for a project into a package file; no install rules are used as they are in the case of binary packages. Out-of-source builds should be used to avoid having extra binary files pollute the source package. If you have files or directories in your source tree that are not wanted in the source package, you can use the variable CPACK_SOURCE_IGNORE_FILES to exclude things from the package. This variable contains a list of regular expressions. Any file or directory that matches a regular expression in that list will be excluded from the sources. The default setting is as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
"/CVS/;/\\\\\\\\.svn/;\\\\\\\\/swp$;\\\\\\\\/#;/#"<br />
</syntaxhighlight><br />
<br />
There are many levels of escapes used in the default value, as this variable is parsed once by CMake and again by CPack. It is important to realize that the source package will not use any install commands; CPack simply copies the entire source tree, minus the files it is told to ignore, into the package. To avoid the multiple levels of escapes, the file referenced by CPACK_PROJECT_CONFIG_FILE should be used to set this variable. The expression is a regular expression and not a wildcard pattern; see Chapter 4 for more information about CMake regular expressions.<br />
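<br />
A sketch of setting the variable from the CPACK_PROJECT_CONFIG_FILE, where only one level of escaping is needed (the build directory and editor backup patterns are illustrative assumptions):<br />
<br />
<syntaxhighlight lang="text"><br />
# one level of escaping: \\. is parsed once into the regex escape \.<br />
set (CPACK_SOURCE_IGNORE_FILES "/CVS/;/\\.svn/;/build/;~$")<br />
</syntaxhighlight><br />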
<br />
<br />
===CPack Installer Commands===<br />
<br />
Since binary packages require CPack to interact with the install rules of the project being packaged, this section will cover some of the options CPack provides to interact with the install rules of a project. CPack can work with CMake's install scripts or with external install commands.<br />
<br />
====CPack and CMake install commands====<br />
<br />
In most CMake projects, using the CMake install rules will be sufficient to create the desired package. By default CPack will run the install rule for the current project. However, if you have a more complicated project, you can specify sub-projects and install directories with the variable CPACK_INSTALL_CMAKE_PROJECTS. This variable should hold quadruplets of install directory, install project name, install component, and install subdirectory. For example, if you had a project with a sub project called MySub that was compiled into a directory called SubProject, and you wanted to install all of its components, you would have this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_INSTALL_CMAKE_PROJECTS "SubProject;MySub;ALL;/")<br />
</syntaxhighlight><br />
<br />
<br />
====CPack and DESTDIR====<br />
<br />
By default CPack does not use the DESTDIR option during the installation phase. Instead it sets the CMAKE_INSTALL_PREFIX to the full path of the temporary directory being used by CPack to stage the install package.<br />
This can be changed by setting CPACK_SET_DESTDIR to on. If the CPACK_SET_DESTDIR option is on, CPack will use the project's cache value of CMAKE_INSTALL_PREFIX, and set DESTDIR to the temporary staging area. This allows absolute paths to be installed under the temporary directory. Relative paths are installed into DESTDIR/${project's CMAKE_INSTALL_PREFIX}, where DESTDIR is set to the temporary staging area.<br />
<br />
As noted earlier, the DESTDIR approach does not work when the install rules reference files by Windows full paths beginning with drive letters (C:/).<br />
<br />
When doing a non-DESTDIR install for packaging, which is the default, any absolute paths are installed into absolute directories, and not into the package. Therefore, projects that do not use the DESTDIR option must not use any absolute paths in install rules. Conversely, projects that use absolute paths must use the DESTDIR option.<br />
<br />
One other variable can be used to control the root path that projects are installed into: CPACK_PACKAGING_INSTALL_PREFIX. By default many of the generators install into the directory /usr. This variable can be used to change that to any directory, including just /.<br />
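<br />
For example, a project might choose an installation root other than /usr (the /opt prefix here is just an illustration):<br />
<br />
<syntaxhighlight lang="text"><br />
# stage the package contents under /opt instead of /usr<br />
set (CPACK_PACKAGING_INSTALL_PREFIX "/opt")<br />
</syntaxhighlight><br />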
<br />
<br />
====CPack and other installed directories====<br />
<br />
It is possible to run other install rules if the project is not CMake based. This can be done by using the variables CPACK_INSTALL_COMMANDS, and CPACK_INSTALLED_DIRECTORIES. CPACK_INSTALL_COMMANDS are commands that will be run during the installation phase of the packaging. CPACK_INSTALLED_DIRECTORIES should contain pairs of directory and subdirectory. The subdirectory can be '.' to be installed in the top-level directory of the installation. The files in each directory will be copied to the corresponding subdirectory of the CPack staging directory and packaged with the rest of the files.<br />
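<br />
A sketch for packaging a non-CMake sub-tree (the legacy directory layout and external install command are hypothetical, invented only to show the shape of the two variables):<br />
<br />
<syntaxhighlight lang="text"><br />
# run a hypothetical external install step during packaging<br />
set (CPACK_INSTALL_COMMANDS "make -C /path/to/legacy install")<br />
# pair: directory to copy, then subdirectory in the package<br />
# ('.' means the top level of the installation)<br />
set (CPACK_INSTALLED_DIRECTORIES "/path/to/legacy/dist;.")<br />
</syntaxhighlight><br />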
<br />
<br />
===CPack for Windows Installer NSIS===<br />
<br />
To create Windows-style wizard-based installer programs, CPack uses NSIS (NullSoft Scriptable Install System). More information about NSIS can be found at the NSIS home page: http://nsis.sourceforge.net/. NSIS is a powerful tool with a scripting language used to create professional Windows installers. To create Windows installers with CPack, you will need NSIS installed on your machine.<br />
<br />
CPack uses configured template files to control NSIS. There are two files configured by CPack during the creation of a NSIS installer. Both files are found in the CMake Modules directory. Modules/NSIS.template.in is the template for the NSIS script, and Modules/NSIS.InstallOptions.ini.in is the template for the modern user interface or MUI used by NSIS. The install options file contains the information about the pages used in the install wizard. This section will describe how to configure CPack to create an NSIS install wizard.<br />
<br />
<br />
====CPack Variables Used by CMake for NSIS====<br />
<br />
This section contains screen captures from the CMake NSIS install wizard. For each part of the installer that can be changed or controlled from CPack, the variables and values used are given.<br />
<br />
The first thing that a user will see of the installer in Windows is the icon for the installer executable itself. By default the installer will have the Null Soft Installer icon, as seen in Figure 9.1 for the 20071023 CMake installer. This icon can be changed by setting the variable CPACK_NSIS_MUI_ICON. The installer for 20071025 in the same figure shows the CMake icon being used for the installer.<br />
<br />
<<figure 9.1>><br />
<br />
The last thing a user will see of the installer in Windows is the icon for the uninstall executable, as seen in Figure 9.2. This option can be set with the CPACK_NSIS_MUI_UNIICON variable. Both the install and uninstall icons must be the same size and format: a valid Windows .ico file usable by Windows Explorer. The icons are set like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# set the install/uninstall icon used for the installer itself<br />
set (CPACK_NSIS_MUI_ICON<br />
"${CMake_SOURCE_DIR}/Utilities/Release\\CMakeLogo.ico")<br />
set (CPACK_NSIS_MUI_UNIICON<br />
"${CMake_SOURCE_DIR}/Utilities/Release\\CMakeLogo.ico")<br />
</syntaxhighlight><br />
<br />
<<figure 9.2>><br />
<br />
On Windows, programs can also be removed using the Add or Remove Programs tool from the control panel as seen in Figure 9.3. The icon for this should be embedded in one of the installed executables. This can be set like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# set the add/remove programs icon using an installed executable<br />
set (CPACK_NSIS_INSTALLED_ICON_NAME "bin\\cmake-gui.exe")<br />
</syntaxhighlight><br />
<br />
<<figure 9.3>><br />
<br />
<<figure 9.4>><br />
<br />
When running the installer, the first screen of the wizard will look like Figure 9.4. In this screen you can control the name of the project that shows up in two places on the screen. The name used for the project is controlled by the variable CPACK_PACKAGE_INSTALL_DIRECTORY or CPACK_NSIS_PACKAGE_NAME. In this example, it was set to "CMake 2.5" like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_INSTALL_DIRECTORY "CMake<br />
${CMake_VERSION_MAJOR}.${CMake_VERSION_MINOR}")<br />
<br />
set (CPACK_NSIS_PACKAGE_NAME "CMake<br />
${CMake_VERSION_MAJOR}.${CMake_VERSION_MINOR}") <br />
</syntaxhighlight><br />
<br />
<<figure 9.5>><br />
<br />
The second page of the install wizard can be seen in Figure 9.5. This screen contains the license agreement and there are several things that can be configured on this page. The banner bitmap to the left of the "License Agreement" label is controlled by the variable CPACK_PACKAGE_ICON like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_ICON<br />
"${CMake_SOURCE_DIR}/Utilities/Release\\CMakeInstall.bmp")<br />
</syntaxhighlight><br />
<br />
CPACK_PACKAGE_INSTALL_DIRECTORY is used again on this page everywhere you see the text "CMake 2.5". The text of the license agreement is set to the contents of the file specified in the CPACK_RESOURCE_FILE_LICENSE variable. CMake does the following:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_RESOURCE_FILE_LICENSE<br />
"${CMAKE_CURRENT_SOURCE_DIR}/Copyright.txt")<br />
</syntaxhighlight><br />
<br />
<<figure 9.6>><br />
<br />
The third page of the installer can be seen in Figure 9.6. This page will only show up if CPACK_NSIS_MODIFY_PATH is set to on. If you check the Create "name" Desktop Icon button, and you put executable names in the variable CPACK_CREATE_DESKTOP_LINKS, then a desktop icon for those executables will be created. For example, to create a desktop icon for the cmake-gui program of CMake, the following is done:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_CREATE_DESKTOP_LINKS cmake-gui)<br />
</syntaxhighlight><br />
<br />
Multiple desktop links can be created if your application contains more than one executable. The link will be created to the Start Menu entry, so CPACK_PACKAGE_EXECUTABLES, which is described later in this section, must also contain the application in order for a desktop link to be created.<br />
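<br />
Putting the two variables together for a hypothetical application (the coolstuff executable and shortcut names are illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
# Start Menu entry (required for the desktop link to be created)<br />
set (CPACK_PACKAGE_EXECUTABLES "coolstuff" "Cool Stuff")<br />
# desktop icon for the same executable<br />
set (CPACK_CREATE_DESKTOP_LINKS coolstuff)<br />
</syntaxhighlight><br />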
<br />
<<figure 9.7>><br />
<br />
The fourth page of the installer seen in Figure 9.7 uses the variable CPACK_PACKAGE_INSTALL_DIRECTORY to specify the default destination folder in Program Files. The following CMake code was used to set that default:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_INSTALL_DIRECTORY "CMake<br />
${CMake_VERSION_MAJOR}.${CMake_VERSION_MINOR}")<br />
</syntaxhighlight><br />
<br />
The remaining pages of the installer wizard do not use any additional CPack variables, and are not included in this section. Another important option that can be set by the NSIS CPack generator is the registry key used. There are several CPack variables that control the default key used. The key is defined in the NSIS.template.in file as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
!define MUI_STARTMENUPAGE_REGISTRY_KEY<br />
"Software\@CPACK_PACKAGE_VENDOR@\@CPACK_PACKAGE_INSTALL_REGISTRY_KEY@"<br />
</syntaxhighlight><br />
<br />
Where the CPACK_PACKAGE_VENDOR value defaults to Humanity, and CPACK_PACKAGE_INSTALL_REGISTRY_KEY defaults to ${CPACK_PACKAGE_NAME} ${CPACK_PACKAGE_VERSION}.<br />
<br />
So for CMake 2.5.20071025 the registry key would look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
HKEY_LOCAL_MACHINE\SOFTWARE\Kitware\CMake 2.5.20071025<br />
</syntaxhighlight><br />
<br />
<br />
====Creating Windows Short Cuts in the Start Menu====<br />
<br />
There are two variables that control the shortcuts that are created in the Windows Start menu by NSIS. The variables contain lists of pairs, and must have an even number of elements to work correctly. The first is CPACK_PACKAGE_EXECUTABLES; it should contain the name of the executable file followed by the name of the shortcut text. For example, in the case of CMake, the executable is called cmake-gui, but the shortcut is named "CMake". CMake does the following to create that shortcut:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_EXECUTABLES "cmake-gui" "CMake")<br />
</syntaxhighlight><br />
<br />
The second is CPACK_NSIS_MENU_LINKS. This variable contains arbitrary links into the install tree, or to external web pages. The first of the pair is always the existing source file or location, and the second is the name that will show up in the Start menu. To add a link to the help file for cmake-gui and a link to the CMake web page, add the following:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_NSIS_MENU_LINKS<br />
"doc/cmake-${VERSION_MAJOR}.${VERSION_MINOR}/cmake-gui.html"<br />
"cmake-gui Help" "http://www.cmake.org" "CMake Web Site")<br />
</syntaxhighlight><br />
<br />
<br />
====Advanced NSIS CPack Options====<br />
<br />
In addition to the variables already discussed, CPack provides a few additional variables that are directly configured into the NSIS script file. These can be used to add NSIS script fragments to the final NSIS script used to create the installer. They are as follows:<br />
<br />
'''CPACK_NSIS_EXTRA_INSTALL_COMMANDS''' Extra commands used during install.<br />
<br />
'''CPACK_NSIS_EXTRA_UNINSTALL_COMMANDS''' Extra commands used during uninstall.<br />
<br />
'''CPACK_NSIS_CREATE_ICONS_EXTRA''' Extra NSIS commands in the icon section of the script.<br />
<br />
'''CPACK_NSIS_DELETE_ICONS_EXTRA''' Extra NSIS commands in the delete icons section of the script.<br />
<br />
When using these variables the NSIS documentation should be referenced, and the author should look at the NSIS.template.in file for the exact placement of the variables.<br />
<br />
<br />
====Setting File Extension Associations With NSIS====<br />
<br />
One example of a useful thing that can be done with the extra install commands is to create associations from file extensions to the installed application. For example, if you had an application CoolStuff that could open files with the extension .cool, you would set the following extra install and uninstall commands:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_NSIS_EXTRA_INSTALL_COMMANDS "<br />
WriteRegStr HKCR '.cool' '' 'CoolFile'<br />
WriteRegStr HKCR 'CoolFile' '' 'Cool Stuff File'<br />
WriteRegStr HKCR 'CoolFile\\shell' '' 'open'<br />
WriteRegStr HKCR 'CoolFile\\Defaulticon' \\<br />
'' '$INSTDIR\\bin\\coolstuff.exe,0'<br />
WriteRegStr HKCR 'CoolFile\\shell\\open\\command' \\<br />
'' '$INSTDIR\\bin\\coolstuff.exe \"%1\"'<br />
WriteRegStr HKCR 'CoolFile\\shell\\edit' \\<br />
'' 'Edit Cool File'<br />
WriteRegStr HKCR 'CoolFile\\shell\\edit\\command' \\<br />
'' '$INSTDIR\\bin\\coolstuff.exe \"%1\"'<br />
System::Call \\<br />
'Shell32::SHChangeNotify(i 0x8000000, i 0, i 0, i 0)'<br />
")<br />
<br />
set (CPACK_NSIS_EXTRA_UNINSTALL_COMMANDS "<br />
DeleteRegKey HKCR '.cool'<br />
DeleteRegKey HKCR 'CoolFile'<br />
")<br />
</syntaxhighlight><br />
<br />
This creates a Windows file association for all files ending in .cool, so that when a user double clicks on a .cool file, coolstuff.exe is run with the full path to the file as an argument. This also sets up an association for editing the file from the Windows right-click menu to the same coolstuff.exe program. The Windows Explorer icon for the file is set to the icon found in the coolstuff.exe executable. When it is uninstalled, the registry keys are removed. Since the double quotes and Windows path separators must be escaped, it is best to put this code into the CPACK_PROJECT_CONFIG_FILE for the project.<br />
<br />
<syntaxhighlight lang="text"><br />
configure_file(<br />
${CoolStuff_SOURCE_DIR}/CoolStuffCPackOptions.cmake.in<br />
${CoolStuff_BINARY_DIR}/CoolStuffCPackOptions.cmake @ONLY)<br />
<br />
set (CPACK_PROJECT_CONFIG_FILE<br />
${CoolStuff_BINARY_DIR}/CoolStuffCPackOptions.cmake)<br />
include (CPack)<br />
</syntaxhighlight><br />
<br />
<br />
====Installing Microsoft Run Time Libraries====<br />
<br />
Although not strictly an NSIS CPack command, if you are creating applications on Windows with the Microsoft compiler you will most likely want to distribute the run time libraries from Microsoft alongside your project. In CMake, all you need to do is the following:<br />
<br />
<syntaxhighlight lang="text"><br />
include (InstallRequiredSystemLibraries)<br />
</syntaxhighlight><br />
<br />
This will add the compiler run time libraries as install files that will go into the bin directory of your application. If you do not want the libraries to go into the bin directory, you would do this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS_SKIP TRUE)<br />
include (InstallRequiredSystemLibraries)<br />
install (PROGRAMS ${CMAKE_INSTALL_SYSTEM_RUNTIME_LIBS}<br />
DESTINATION mydir)<br />
</syntaxhighlight><br />
<br />
It is important to note that the run time libraries must be right next to the executables of your package in order for Windows to find them. With Visual Studio 2005 and 2008, side by side manifest files are also required to be installed with your application when distributing the run time libraries. If you want to package a debug version of your software you will need to set CMAKE_INSTALL_DEBUG_LIBRARIES to ON prior to the include. Be aware, however, that the license terms may prohibit you from re-distributing the debug libraries. Double check the licensing terms for the version of Visual Studio you're using before deciding to set CMAKE_INSTALL_DEBUG_LIBRARIES to ON.<br />
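<br />
As a hedged sketch (assuming a Visual Studio build whose license permits redistributing the debug run time), a debug package would set the flag before the include:<br />
<br />
<syntaxhighlight lang="text"><br />
# Package the debug run time libraries instead of the release ones.<br />
# Check the Visual Studio license terms before shipping these.<br />
set (CMAKE_INSTALL_DEBUG_LIBRARIES ON)<br />
include (InstallRequiredSystemLibraries)<br />
</syntaxhighlight><br />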
<br />
<br />
====CPack Component Install Support====<br />
<br />
By default, CPack's installers consider all of the files installed by a project as a single, monolithic unit: either the whole set of files is installed, or none of them are. However, with many projects it makes sense for the installation to be subdivided into distinct, user-selectable components. Some users may want to install only the command-line tools for a project, while other users might want the GUI or the header files.<br />
<br />
This section describes how to configure CPack to generate component-based installers that allow users to select the set of project components they wish to install. As an example, a simple installer will be created for a library that has three components: a library binary, a sample application, and a C++ header file. When finished, the resulting installers for Windows and Mac OS X look like the ones in Figure 9.8.<br />
<br />
<<figure 9.8: Mac and Windows Component Installers>><br />
<br />
The simple example we will be working with is as follows; it has a library and an executable. CPack commands that have already been covered are used.<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6.0 FATAL_ERROR)<br />
project (MyLib)<br />
<br />
add_library(mylib mylib.cpp)<br />
<br />
add_executable(mylibapp mylibapp.cpp)<br />
target_link_libraries(mylibapp mylib)<br />
<br />
install (TARGETS mylib ARCHIVE DESTINATION lib)<br />
install (TARGETS mylibapp RUNTIME DESTINATION bin)<br />
install (FILES mylib.h DESTINATION include)<br />
<br />
# add CPack to project<br />
set (CPACK_PACKAGE_NAME "MyLib")<br />
set (CPACK_PACKAGE_VENDOR "CMake.org")<br />
set (CPACK_PACKAGE_DESCRIPTION_SUMMARY<br />
"MyLib - CPack Component Installation Example")<br />
set (CPACK_PACKAGE_VERSION "1.0.0")<br />
set (CPACK_PACKAGE_VERSION_MAJOR "1")<br />
set (CPACK_PACKAGE_VERSION_MINOR "0")<br />
set (CPACK_PACKAGE_VERSION_PATCH "0")<br />
set (CPACK_PACKAGE_INSTALL_DIRECTORY "CPack Component Example")<br />
<br />
# This must always be after all CPACK_* variables are defined<br />
include (CPack)<br />
</syntaxhighlight><br />
<br />
=====Specifying Components=====<br />
<br />
The first step in building a component-based installation is to identify the set of installable components. In this example, three components will be created: the library binary, the application, and the header file. This decision is arbitrary and project-specific, but be sure to identify the components that correspond to units of functionality important to your user, rather than basing the components on the internal structure of your program.<br />
<br />
For each of these components, we need to identify the component to which each installed file belongs. For each INSTALL command in CMakeLists.txt, add an appropriate COMPONENT argument stating which component the installed files will be associated with:<br />
<br />
<syntaxhighlight lang="text"><br />
install (TARGETS mylib<br />
ARCHIVE<br />
DESTINATION lib<br />
COMPONENT libraries)<br />
install (TARGETS mylibapp<br />
RUNTIME<br />
DESTINATION bin<br />
COMPONENT applications)<br />
install(FILES mylib.h<br />
DESTINATION include<br />
COMPONENT headers)<br />
</syntaxhighlight><br />
<br />
Note that the COMPONENT argument to the INSTALL command is not new; it has long been part of CMake's INSTALL command, allowing installation of only part of a project. If you are using any of the older installation commands (INSTALL_TARGETS, INSTALL_FILES, etc.), you will need to convert them to INSTALL commands in order to use components.<br />
<br />
The next step is to notify CPack of the names of all of the components in your project by calling the cpack_add_component function for each component of the package:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component(applications)<br />
cpack_add_component(libraries)<br />
cpack_add_component(headers)<br />
</syntaxhighlight><br />
<br />
At this point you can build a component-based installer with CPack that will allow one to independently install the applications, libraries, and headers of MyLib. The Windows and Mac OS X installers will look like the ones shown in Figure 9.9.<br />
<br />
<<figure 9.9:Windows and Mac OS X Component Installer First Page>><br />
<br />
<br />
=====Naming Components=====<br />
<br />
At this point, you may have noted that the names of the actual components in the installer are not very descriptive: they just say "applications," "libraries," or "headers," as specified in the component names. These names can be improved by using the DISPLAY_NAME option in the cpack_add_component function:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (applications DISPLAY_NAME<br />
"MyLib Application")<br />
cpack_add_component (libraries DISPLAY_NAME "Libraries")<br />
cpack_add_component (headers DISPLAY_NAME "C++ Headers")<br />
</syntaxhighlight><br />
<br />
Any variable prefixed with CPACK_COMPONENT_${COMPNAME}, where ${COMPNAME} is the uppercase name of a component, is used to set a particular property of that component in the installer. Here, we set the DISPLAY_NAME property of each of our components so that we get human-readable names. These names will be listed in the selection box rather than the internal component names "applications," "libraries," and "headers."<br />
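<br />
For illustration, the cpack_add_component calls above are roughly equivalent to setting these variables directly (note the uppercased component names inside the variable names):<br />
<br />
<syntaxhighlight lang="text"><br />
# Equivalent variable form of the DISPLAY_NAME settings above<br />
set (CPACK_COMPONENT_APPLICATIONS_DISPLAY_NAME "MyLib Application")<br />
set (CPACK_COMPONENT_LIBRARIES_DISPLAY_NAME "Libraries")<br />
set (CPACK_COMPONENT_HEADERS_DISPLAY_NAME "C++ Headers")<br />
</syntaxhighlight><br />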
<br />
<<figure 9.10: Windows and Mac OS X Installers with named components>><br />
<br />
<br />
=====Adding Component Descriptions=====<br />
<br />
There are several other properties associated with components, including the ability to make a component hidden, required, or disabled by default. Of particular note is the DESCRIPTION property, which provides some descriptive text for the component. This text will show up in a separate "description" box in the installer and is updated either when the user's mouse hovers over the name of the corresponding component (Windows), or when the user clicks on a component (Mac OS X). We will add a description for each of our components below:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (applications DISPLAY_NAME "MyLib Application"<br />
DESCRIPTION<br />
"An extremely useful application that makes use of MyLib"<br />
)<br />
cpack_add_component (libraries DISPLAY_NAME "Libraries"<br />
DESCRIPTION<br />
"Static libraries used to build programs with MyLib"<br />
)<br />
cpack_add_component (headers DISPLAY_NAME "C++ Headers"<br />
DESCRIPTION<br />
"C/C++ header files for use with MyLib"<br />
)<br />
</syntaxhighlight><br />
<br />
Generally, descriptions should provide enough information for the user to make a decision on whether to install the component, but should not themselves be more than a few lines long (the "Description" box in the installers tends to be small). Figure 9.11 shows the description display for both the Windows and Mac OS X installers.<br />
<br />
<<figure 9.11: Component Installers with descriptions>><br />
<br />
<br />
=====Component Interdependencies=====<br />
<br />
With most projects the various components are not completely independent. For example, an application component may depend on the shared libraries in another component to execute properly, such that installing the application component without the corresponding shared libraries would result in an unusable installation. CPack allows you to express the dependencies between components, so that a component will only be installed if all of the other components it depends on are also installed.<br />
<br />
To illustrate component dependencies, we will place a simple restriction on our component-based installer. Since we do not provide source code in our installer, the C++ header files we distribute can only actually be used if the user also installs the library binary to link their program against. Thus, the "headers" component depends on the availability of the "libraries" component. We can express this notion by setting the DEPENDS property for the "headers" component as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (headers DISPLAY_NAME "C++ Headers"<br />
DESCRIPTION<br />
"C/C++ header files for use with MyLib"<br />
DEPENDS libraries<br />
)<br />
</syntaxhighlight><br />
<br />
The DEPENDS property for a component is actually a list, so a component can depend on several other components. By expressing all of the component dependencies in this manner, you can ensure that users will not be able to select an incomplete set of components at installation time.<br />
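<br />
Since DEPENDS accepts a list, a hypothetical "documentation" component (not part of the example project) that requires both other components could be declared like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# Hypothetical component that depends on two other components<br />
cpack_add_component (documentation<br />
  DISPLAY_NAME "API Documentation"<br />
  DEPENDS libraries headers)<br />
</syntaxhighlight><br />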
<br />
<br />
=====Grouping Components=====<br />
<br />
When the number of components in your project grows large, you may need to provide additional organization for the list of components. To help with this organization, CPack includes the notion of component groups. A component group is simply a way to provide a name for a group of related components. Within the user interface a component group has its own name, and underneath that group are the names of all of the components in that group. Users will have the option to (de-)select the installation of all components in the group with a single click, or expand the group to select individual components.<br />
<br />
We will expand our example by categorizing its three components, "applications," "libraries," and "headers," into "Runtime" and "Development" groups. We can place a component into a group by using the GROUP option to the cpack_add_component function as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (applications<br />
DISPLAY_NAME "MyLib Application"<br />
DESCRIPTION<br />
"An extremely useful application that makes use of MyLib"<br />
GROUP Runtime)<br />
cpack_add_component (libraries<br />
DISPLAY_NAME "Libraries"<br />
DESCRIPTION<br />
"Static libraries used to build programs with MyLib"<br />
GROUP Development)<br />
cpack_add_component (headers<br />
DISPLAY_NAME "C++ Headers"<br />
DESCRIPTION "C/C++ header files for use with MyLib"<br />
GROUP Development<br />
DEPENDS libraries<br />
)<br />
</syntaxhighlight><br />
<br />
Like components, component groups have various properties that can be customized, including the DISPLAY_NAME and DESCRIPTION. For example, the following code adds an expanded description to the "Development" group:<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component_group(Development<br />
EXPANDED<br />
DESCRIPTION<br />
"All of the tools you'll ever need to develop software")<br />
</syntaxhighlight><br />
<br />
Once you have customized the component groups to your liking, rebuild the binary installer to see the new organization: the MyLib application will show up under the new "Runtime" group, while the MyLib library and C++ header will show up under the new "Development" group. One can easily turn on/off all of the components within a group using the installer's GUI. This can be seen in Figure 9.12.<br />
<br />
<br />
=====Installation Types (NSIS Only)=====<br />
<br />
<<Figure 9.12: Component Grouping>><br />
<br />
When a project contains a large number of components, it is common for a Windows installer to provide pre-selected sets of components based on specific user needs. For example, a user wanting to develop software against a library will want one set of components, while an end user might use an entirely different set. CPack supports this notion of pre-selected component sets via installation types. An installation type is simply a set of components. When the user selects an installation type, exactly that set of components is selected; the user is then permitted to further customize the installation as desired. Currently, this is only supported by the NSIS generator.<br />
<br />
For our simple example, we will create two installation types: a "Full" installation type that contains all of the components, and a "Developer" installation type that includes only the libraries and headers. To do this we use the function cpack_add_install_type to add the types.<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_install_type(Full DISPLAY_NAME "Everything")<br />
cpack_add_install_type (Developer)<br />
</syntaxhighlight><br />
<br />
Next, we set the INSTALL_TYPES property of each component to state which installation types will include that component. This is done with the INSTALL_TYPES option to the cpack_add_component function.<br />
<br />
<syntaxhighlight lang="text"><br />
cpack_add_component (libraries DISPLAY_NAME "Libraries"<br />
DESCRIPTION<br />
"Static libraries used to build programs with MyLib"<br />
GROUP Development<br />
INSTALL_TYPES Developer Full)<br />
cpack_add_component (applications<br />
DISPLAY_NAME "MyLib Application"<br />
DESCRIPTION<br />
"An extremely useful application that makes use of MyLib"<br />
GROUP Runtime<br />
INSTALL_TYPES Full)<br />
cpack_add_component (headers<br />
DISPLAY_NAME "C++ Headers"<br />
DESCRIPTION "C/C++ header files for use with MyLib"<br />
GROUP Development<br />
DEPENDS libraries<br />
INSTALL_TYPES Developer Full)<br />
</syntaxhighlight><br />
<br />
Components can be listed under any number of installation types. If you rebuild the Windows installer, the components page will contain a combo box that allows you to select the installation type, and therefore its corresponding set of components as shown in Figure 9.13.<br />
<br />
<br />
=====Variables that control CPack components=====<br />
<br />
The functions cpack_add_install_type, cpack_add_component_group, and cpack_add_component just set CPACK_ variables. Those variables are described in the following list:<br />
<br />
'''CPACK_COMPONENTS_ALL''' This is a list containing the names of all components that should be installed by CPack. The presence of this variable indicates that CPack should build a component-based installer. Files associated with any components not listed here, or with installation commands not associated with any component, will not be installed.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_DISPLAY_NAME''' The displayed name of the component ${COMPNAME}, used in graphical installers to display the component name. This value can be any string.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_DESCRIPTION''' An extended description of the component ${COMPNAME}, used in graphical installers to give the user additional information about the component. Descriptions can span multiple lines using "\n" as the line separator.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_HIDDEN''' A flag that indicates that this component will be hidden in the graphical installer, and therefore cannot be selected or installed. Only available with NSIS.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_REQUIRED''' A flag that indicates that this component is required, and therefore will always be installed. It will be visible in the graphical installer but it cannot be unselected.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_DISABLED''' A flag that indicates that this component should be disabled (unselected) by default. The user is free to select this component for installation.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_DEPENDS''' Lists the components on which this component depends. If this component is selected, then each of the components listed must also be selected.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_GROUP''' Names a component group that this component is a part of. If not provided, the component will be a standalone component, not part of any component group.<br />
<br />
'''CPACK_COMPONENT_${COMPNAME}_INSTALL_TYPES''' Lists the installation types that this component is a part of. When one of these installation types is selected, this component will automatically be selected. Only available with NSIS.<br />
<br />
'''CPACK_COMPONENT_GROUP_${GROUPNAME}_DISPLAY_NAME''' The displayed name of the component group ${GROUPNAME}, used in graphical installers to display the component group name. This value can be any string.<br />
<br />
'''CPACK_COMPONENT_GROUP_${GROUPNAME}_DESCRIPTION''' An extended description of the component group ${GROUPNAME}, used in graphical installers to give the user additional information about the components contained within this group. Descriptions can span multiple lines using "\n" as the line separator.<br />
<br />
'''CPACK_COMPONENT_GROUP_${GROUPNAME}_BOLD_TITLE''' A flag indicating whether the group title should be in bold. Only available with NSIS.<br />
<br />
'''CPACK_COMPONENT_GROUP_${GROUPNAME}_EXPANDED''' A flag indicating whether the group should start out "expanded", showing its components. Otherwise only the group name itself will be shown until the user clicks on the group. Only available with NSIS .<br />
<br />
'''CPACK_INSTALL_TYPE_${INSTNAME}_DISPLAY_NAME''' The displayed name of the installation type ${INSTNAME}. This value can be any string.<br />
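<br />
As a sketch of how these variables fit together, part of the three-component example could be written without the helper functions; note that component and group names are uppercased inside the variable names:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_COMPONENTS_ALL applications libraries headers)<br />
set (CPACK_COMPONENT_HEADERS_DEPENDS libraries)<br />
set (CPACK_COMPONENT_HEADERS_GROUP Development)<br />
set (CPACK_COMPONENT_GROUP_DEVELOPMENT_EXPANDED ON)<br />
</syntaxhighlight><br />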
<br />
<br />
===CPack for Cygwin Setup===<br />
<br />
Cygwin (http://www.cygwin.com/) is a Linux-like environment for Windows that consists of a run time DLL and a collection of tools. To add tools to the official cygwin distribution, the cygwin setup program is used. The setup tool requires very specific layouts for the source and binary trees that are to be included. CPack can create the source and binary tar files and compress them with bzip2 so that they can be uploaded to the cygwin mirror sites. You must, of course, have your package accepted by the cygwin community before that is done. Since the layout of the package is more restrictive than that of other packaging tools, you may have to change some of the install options for your project.<br />
<br />
The cygwin setup program requires that all files be installed into /usr/bin, /usr/share/package-version, /usr/share/man, and /usr/share/doc/package-version. The project must install its files into bin and share; the cygwin CPack generator automatically adds the /usr prefix to the install directory for the project.<br />
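<br />
In practice this means a project's install rules use destinations relative to /usr; a minimal sketch (the target and file names are illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
# The cygwin CPack generator prepends /usr to these destinations<br />
install (TARGETS mytool RUNTIME DESTINATION bin)<br />
install (FILES mytool.1 DESTINATION share/man/man1)<br />
install (FILES README.txt DESTINATION share/doc/mytool-1.0)<br />
</syntaxhighlight><br />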
<br />
Cygwin also requires that you provide a shell script that can be used to create the package from the sources. Any cygwin-specific patches that are required for the package must also be provided in a diff file. CMake's configure_file command can be used to create both of these files for a project. Since CMake is a cygwin package, the CMake code used to configure CMake for the cygwin CPack generators is as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_PACKAGE_NAME CMake)<br />
<br />
# setup the name of the package for cygwin<br />
set (CPACK_PACKAGE_FILE_NAME<br />
"${CPACK_PACKAGE_NAME}-${CMake_VERSION}")<br />
<br />
# the source has the same name as the binary<br />
set (CPACK_SOURCE_PACKAGE_FILE_NAME ${CPACK_PACKAGE_FILE_NAME})<br />
<br />
# Create a cygwin version number in case there are changes<br />
# for cygwin that are not reflected upstream in CMake<br />
set (CPACK_CYGWIN_PATCH_NUMBER 1)<br />
<br />
# if we are on cygwin and have cpack, then force the<br />
# doc, data and man dirs to conform to cygwin style directories<br />
set (CMAKE_DOC_DIR "/share/doc/${CPACK_PACKAGE_FILE_NAME}")<br />
set (CMAKE_DATA_DIR "/share/${CPACK_PACKAGE_FILE_NAME}")<br />
set (CMAKE_MAN_DIR "/share/man")<br />
<br />
# These files are required by the cmCPackCygwinSourceGenerator and<br />
# the files put into the release tar files.<br />
set (CPACK_CYGWIN_BUILD_SCRIPT<br />
"${CMake_BINARY_DIR}/@CPACK_PACKAGE_FILE_NAME@-<br />
@CPACK_CYGWIN_PATCH_NUMBER@.sh")<br />
set (CPACK_CYGWIN_PATCH_FILE<br />
"${CMake_BINARY_DIR}/@CPACK_PACKAGE_FILE_NAME@-<br />
@CPACK_CYGWIN_PATCH_NUMBER@.patch")<br />
<br />
# include the sub directory for cygwin releases<br />
include (Utilities/Release/Cygwin/CMakeLists.txt)<br />
<br />
# when packaging source make sure to exclude the .build directory<br />
set (CPACK_SOURCE_IGNORE_FILES<br />
"/CVS/" "/\\\\.build/" "/\\\\.svn/" "\\\\.swp$" "\\\\.#" "/#" "~$")<br />
</syntaxhighlight><br />
<br />
Utilities/Release/Cygwin/CMakeLists.txt:<br />
<br />
<syntaxhighlight lang="text"><br />
# create the setup-hint file for cygwin<br />
configure_file (<br />
"${CMake_SOURCE_DIR}/Utilities/Release/Cygwin/cygwin-setup.hint.in"<br />
"${CMake_BINARY_DIR}/setup.hint")<br />
<br />
configure_file (<br />
"${CMake_SOURCE_DIR}/Utilities/Release/Cygwin/README.cygwin.in"<br />
"${CMake_BINARY_DIR}/Docs/@CPACK_PACKAGE_FILE_NAME@-<br />
@CPACK_CYGWIN_PATCH_NUMBER@.README" )<br />
<br />
install_files (/share/doc/Cygwin FILES<br />
${CMake_BINARY_DIR}/Docs/@CPACK_PACKAGE_FILE_NAME@-<br />
@CPACK_CYGWIN_PATCH_NUMBER@.README)<br />
<br />
# create the shell script that can build the project<br />
configure_file (<br />
"${CMake_SOURCE_DIR}/Utilities/Release/Cygwin/cygwin-package.sh.in"<br />
${CPACK_CYGWIN_BUILD_SCRIPT})<br />
<br />
# Create the patch required for cygwin for the project<br />
configure_file (<br />
"${CMake_SOURCE_DIR}/Utilities/Release/Cygwin/cygwin-patch.diff.in"<br />
${CPACK_CYGWIN_PATCH_FILE})<br />
</syntaxhighlight><br />
<br />
The file Utilities/Release/Cygwin/cygwin-package.sh.in can be found in the CMake source tree. It is a shell script that can be used to re-create the cygwin package from source. For other projects, there is a template install script in Templates/cygwin-package.sh.in. This script should be able to configure and package any cygwin-based CPack project, and it is required for all official cygwin packages.<br />
<br />
Another important file for cygwin binaries is share/doc/Cygwin/package-version.README. This file should contain the information required by cygwin about the project. In the case of CMake, the file is configured so that it can contain the correct version information. For example, part of that file for CMake looks like this:<br />
<br />
<syntaxhighlight lang="text"><br />
Build instructions:<br />
unpack CMake-2.5.20071029-1-src.tar.bz2<br />
if you use setup to install this src package, it will be<br />
unpacked under /usr/src automatically<br />
cd /usr/src<br />
./CMake-2.5.20071029-1.sh all<br />
This will create:<br />
/usr/src/CMake-2.5.20071029.tar.bz2<br />
/usr/src/CMake-2.5.20071029-1-src.tar.bz2<br />
</syntaxhighlight><br />
<br />
<br />
===CPack for Mac OS X PackageMaker===<br />
<br />
On the Apple Mac OS X operating system, CPack provides the ability to use the system PackageMaker tool. This section will show the CMake application install screens users will see when installing the CMake package on OS X. The CPack variables set to change the text in the installer will be given for each screen of the installer.<br />
<br />
<<Figure 9.14: Mac package inside .dmg>><br />
<br />
In Figure 9.14, the .pkg file found inside the .dmg disk image created by the CPack PackageMaker generator for Mac OS X is shown. The name of this file is controlled by the CPACK_PACKAGE_FILE_NAME variable. If this is not set, CPack will use a default name based on the package name and version settings.<br />
<br />
When the .pkg file is run, the package wizard starts with the screen seen in Figure 9.15. The text in this window is controlled by the file pointed to by the CPACK_RESOURCE_FILE_WELCOME variable.<br />
<br />
Figure 9.16 shows the read me section of the package wizard. The text for this window is customized using the CPACK_RESOURCE_FILE_README variable, which should contain the path to a file with the text to be displayed on this screen.<br />
<br />
Figure 9.17 contains the license text for the package. Users must accept the license for the installation process to continue. The text for the license comes from the file pointed to by the CPACK_RESOURCE_FILE_LICENSE variable.<br />
<br />
The other screens in the installation process are not customizable from CPack. For more advanced features of this installer, there are two CPack templates that you can modify: Modules/CPack.Info.plist.in and Modules/CPack.Description.plist.in. These files can be replaced by using the CMAKE_MODULE_PATH variable to point to a directory in your project containing a modified copy of either or both.<br />
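<br />
A minimal sketch of wiring up these variables (the file names and paths are illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_RESOURCE_FILE_WELCOME<br />
  "${CMAKE_CURRENT_SOURCE_DIR}/Welcome.txt")<br />
set (CPACK_RESOURCE_FILE_README<br />
  "${CMAKE_CURRENT_SOURCE_DIR}/ReadMe.txt")<br />
set (CPACK_RESOURCE_FILE_LICENSE<br />
  "${CMAKE_CURRENT_SOURCE_DIR}/License.txt")<br />
# To override the plist templates, point CMAKE_MODULE_PATH at a<br />
# directory holding modified copies of CPack.Info.plist.in and/or<br />
# CPack.Description.plist.in<br />
set (CMAKE_MODULE_PATH<br />
  "${CMAKE_CURRENT_SOURCE_DIR}/CMake" ${CMAKE_MODULE_PATH})<br />
</syntaxhighlight><br />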
<br />
<<Figure 9.15: Introduction screen, Mac PackageMaker>><br />
<br />
<<Figure 9.16: Readme section of the Mac package wizard>><br />
<br />
<<Figure 9.17: License screen, Mac packager>><br />
<br />
<br />
===CPack for Mac OS X Drag and Drop===<br />
<br />
CPack also supports the creation of a Drag and Drop installer for the Mac. In this case a .dmg disk image is created. The image contains both a symbolic link to the /Applications directory and a copy of the project's install tree. In this case it is best to use a Mac application bundle or a single folder containing your relocatable installation as the only install target for the project. The variable CPACK_PACKAGE_EXECUTABLES is used to point to the application bundle for the project.<br />
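<br />
A hedged sketch for a bundle-based project (the target name "myapp" and its display label are illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_GENERATOR "DragNDrop")<br />
# pairs of: executable name, display label<br />
set (CPACK_PACKAGE_EXECUTABLES "myapp" "MyApp")<br />
include (CPack)<br />
</syntaxhighlight><br />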
<br />
<<Figure 9.18: Drag and Drop License dialog>><br />
<br />
<br />
===CPack for Mac OS X X11 Applications===<br />
<br />
<<Figure 9.19: Resulting Drag and Drop folders>><br />
<br />
CPack also includes an OS X X11 generator. This can be used to package X11-based applications, as well as to make them act more like native OS X applications by wrapping them with a script that allows users to run them as they would any native OS X application. Much like the OS X PackageMaker generator, the OS X X11 generator creates a .dmg disk image file. In this example, an X11 application called KWPolygonalObjectViewerExample is packaged with the OS X X11 CPack generator.<br />
<br />
<<Figure 9.20: Mac OS X X11 package disk image>><br />
<br />
This figure shows the disk image created. In this case, CPACK_PACKAGE_NAME was set to KWPolygonalObjectViewerExample, and the version information was left at the CPack default of 0.1.1. The variable CPACK_PACKAGE_EXECUTABLES was set to the pair KWPolygonalObjectViewerExample and KWPolygonalObjectViewerExample, since the installed X11 application is called KWPolygonalObjectViewerExample.<br />
<br />
The above figure shows what a user would see after clicking on the .dmg file created by CPack; Mac OS X mounts this disk image as a disk.<br />
<br />
<<Figure 9.21: Opening OS X X11 disk image>><br />
<br />
<<Figure 9.22: Mounted .dmg disk image>><br />
<br />
This figure shows the mounted disk image. It contains a symbolic link to the system's /Applications directory and an application bundle for each executable found in CPACK_PACKAGE_EXECUTABLES. Users can then drag and drop the applications into the Applications folder, as seen in the figure below.<br />
<br />
<<Figure 9.23: Drag and drop application to Applications>><br />
<br />
CPack actually provides a C++-based executable that can run an X11 application via the Apple scripting language. The installed application bundle runs that forwarding application when the user double-clicks on KWPolygonalObjectViewerExample. This script makes sure that the X11 server is started. The script that is run can be found in CMake/Modules/CPack.RuntimeScript.in, and the source for the script launcher C++ program can be found in Source/CPack/OSXScriptLauncher.cxx.<br />
<br />
<br />
===CPack for Debian Packages===<br />
<br />
A Debian package .deb is simply an "ar" archive. CPack includes the code for the BSD style ar that is required by Debian packages. The Debian packager uses the standard set of CPack variables to initialize a set of Debian specific variables. These can be overridden in the CPACK_PROJECT_CONFIG_FILE; the name of the generator is "DEB". The variables used by the DEB generator are as follows:<br />
<br />
'''CPACK_DEBIAN_PACKAGE_NAME''' defaults to the lower case of CPACK_PACKAGE_NAME.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_ARCHITECTURE''' defaults to i386.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_DEPENDS''' This must be set to other packages that this package depends on, and if empty a warning is emitted.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_MAINTAINER''' defaults to the value of CPACK_PACKAGE_CONTACT.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_DESCRIPTION''' defaults to the value of CPACK_PACKAGE_DESCRIPTION_SUMMARY.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_SECTION''' defaults to devel.<br />
<br />
'''CPACK_DEBIAN_PACKAGE_PRIORITY''' defaults to optional.<br />
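<br />
A hedged sketch of selecting the DEB generator and overriding a few of these defaults (the maintainer address and dependency list are illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_GENERATOR "DEB")<br />
set (CPACK_DEBIAN_PACKAGE_MAINTAINER "jane@example.com")<br />
# avoid the warning emitted when no dependencies are listed<br />
set (CPACK_DEBIAN_PACKAGE_DEPENDS "libc6 (>= 2.3.1-6)")<br />
set (CPACK_DEBIAN_PACKAGE_SECTION "devel")<br />
include (CPack)<br />
</syntaxhighlight><br />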
<br />
<br />
===CPack for RPM===<br />
<br />
CPack has support for creating Linux RPM files. The name of the generator as set in CPACK_GENERATOR is "RPM". The RPM package capability requires that rpmbuild is installed on the machine and is in PATH. The RPM packager uses the standard set of CPack variables to initialize RPM specific variables. The RPM specific variables are as follows:<br />
<br />
'''CPACK_RPM_PACKAGE_SUMMARY''' defaults to the value of CPACK_PACKAGE_DESCRIPTION_SUMMARY.<br />
<br />
'''CPACK_RPM_PACKAGE_NAME''' defaults to the lower case of CPACK_PACKAGE_NAME.<br />
<br />
'''CPACK_RPM_PACKAGE_VERSION''' defaults to the value of CPACK_PACKAGE_VERSION.<br />
<br />
'''CPACK_RPM_PACKAGE_ARCHITECTURE''' defaults to i386.<br />
<br />
'''CPACK_RPM_PACKAGE_RELEASE''' defaults to 1. This is the version of the RPM file, not the version of the software being packaged.<br />
<br />
'''CPACK_RPM_PACKAGE_GROUP''' defaults to none.<br />
<br />
'''CPACK_RPM_PACKAGE_VENDOR''' defaults to the value of CPACK_PACKAGE_VENDOR.<br />
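<br />
A minimal sketch of overriding some of these defaults for the RPM generator (rpmbuild must be on the PATH; the group name is illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
set (CPACK_GENERATOR "RPM")<br />
set (CPACK_RPM_PACKAGE_RELEASE 2)   # second packaging of the same sources<br />
set (CPACK_RPM_PACKAGE_GROUP "Development/Tools")<br />
set (CPACK_RPM_PACKAGE_VENDOR "CMake.org")<br />
include (CPack)<br />
</syntaxhighlight><br />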
<br />
<br />
===CPack Files===<br />
<br />
There are a number of files that are used by CPack that can be useful for learning more about how CPack works and what options you can set. These files can also be used as the starting point for other generators for CPack. These files can mostly be found in the Modules and Templates directories of CMake and typically start with the prefix CPack. As of version 2.8.8, you may also refer to cpack --help-variable-list and cpack --help-variable for the full set of documented CPACK_* variables.<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>
<hr />
<div>==CHAPTER EIGHT::CROSS-COMPILING WITH CMAKE==<br />
<br />
Cross-compiling a piece of software means that the software is built on one system, but is intended to run on a different system. The system used to build the software will be called the "build host," and the system for which the software is built will be called the "target system" or "target platform." The target system usually runs a different operating system (or none at all) and/or runs on different hardware. A typical use case is in software development for embedded devices like network switches, mobile phones, or engine control units. In these cases, the target platform doesn't have or is not able to run the required software development environment.<br />
<br />
Starting with CMake 2.6.0, cross-compiling is fully supported by CMake, ranging from cross-compiling from Linux to Windows, to cross-compiling for supercomputers, through to cross-compiling for small embedded devices without an operating system (OS).<br />
<br />
Cross-compiling has several consequences for CMake:<br />
<br />
* CMake cannot automatically detect the target platform.<br />
* CMake cannot find libraries and headers in the default system directories.<br />
* Executables built during cross compiling cannot be executed.<br />
<br />
Cross-compiling support doesn't mean that all CMake-based projects can be magically cross-compiled out-of-the-box (some can), but that CMake separates information about the build platform from information about the target platform, and gives the user mechanisms to solve cross-compiling issues without additional requirements such as running virtual machines, etc.<br />
<br />
To support cross-compiling for a specific software project, CMake must be told about the target platform via a toolchain file. The CMakeLists.txt files may have to be adjusted so they are aware that the build platform may have different properties than the target platform, and they have to deal with the instances where a compiled executable would be executed on the build host.<br />
<br />
<br />
===Toolchain Files===<br />
<br />
In order to use CMake for cross-compiling, a CMake file that describes the target platform has to be created, called the "toolchain file." This file tells CMake everything it needs to know about the target platform. Here is an example that uses the MinGW cross-compiler for Windows under Linux; the contents will be explained line-by-line afterwards:<br />
<br />
<syntaxhighlight lang="text"><br />
# the name of the target operating system<br />
set (CMAKE_SYSTEM_NAME Windows)<br />
<br />
# which compilers to use for C and C++<br />
set (CMAKE_C_COMPILER i586-mingw32msvc-gcc )<br />
set (CMAKE_CXX_COMPILER i586-mingw32msvc-g++ )<br />
<br />
# where is the target environment located<br />
set (CMAKE_FIND_ROOT_PATH /usr/i586-mingw32msvc<br />
/home/alex/mingw-install )<br />
<br />
# adjust the default behavior of the FIND_XXX() commands:<br />
# search programs in the host environment<br />
set (CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)<br />
<br />
# search headers and libraries in the target environment<br />
set (CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)<br />
set (CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)<br />
</syntaxhighlight><br />
<br />
Assuming that this file is saved with the name TC-mingw.cmake in your home directory, you instruct CMake to use this file by setting the CMAKE_TOOLCHAIN_FILE (page 633) variable:<br />
<br />
<syntaxhighlight lang="text"><br />
~/src$ cd build<br />
~/src/build$ cmake -DCMAKE_TOOLCHAIN_FILE=~/TC-mingw.cmake ..<br />
...<br />
</syntaxhighlight><br />
<br />
'''CMAKE_TOOLCHAIN_FILE''' has to be specified only on the initial CMake run; after that, the results are reused from the CMake cache. You don't need to write a separate toolchain file for every piece of software you want to build. The toolchain files are per target platform; i.e. if you are building several software packages for the same target platform, you only have to write one toolchain file that can be used for all packages. What do the settings in the toolchain file mean? We will examine them one-by-one. Since CMake cannot guess the target operating system or hardware, you have to set the following CMake variables:<br />
<br />
'''CMAKE_SYSTEM_NAME''' (page 653) This variable is mandatory; it sets the name of the target system, i.e. to the same value as CMAKE_SYSTEM_NAME would have if CMake were run on the target system. Typical examples are "Linux" and "Windows." It is used for constructing the file names of the platform files like Linux.cmake or Windows-gcc.cmake. If your target is an embedded system without an OS, set CMAKE_SYSTEM_NAME to "Generic." Presetting CMAKE_SYSTEM_NAME this way, instead of letting it be detected automatically, causes CMake to consider the build a cross-compiling build, and the CMake variable CMAKE_CROSSCOMPILING (page 626) will be set to TRUE. CMAKE_CROSSCOMPILING is the variable that should be tested in CMake files to determine whether the current build is a cross-compiled build or not.<br />
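<br />
As a minimal sketch (the status messages are illustrative only, not taken from the book), a CMakeLists.txt can branch on this variable like so:<br />
<br />
<syntaxhighlight lang="text"><br />
if (CMAKE_CROSSCOMPILING)<br />
  message (STATUS "Cross-compiling for ${CMAKE_SYSTEM_NAME}")<br />
else ()<br />
  message (STATUS "Native build for ${CMAKE_SYSTEM_NAME}")<br />
endif ()<br />
</syntaxhighlight><br />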
<br />
'''CMAKE_SYSTEM_VERSION''' (page 653) This variable is optional; it sets the version of your target system. CMake does not currently use CMAKE_SYSTEM_VERSION.<br />
<br />
'''CMAKE_SYSTEM_PROCESSOR''' (page 653) This variable is optional; it sets the processor or hardware name of the target system. It is used in CMake for one purpose, to load the<br />
<br />
<syntaxhighlight lang="text"><br />
${CMAKE_SYSTEM_NAME}-COMPILER_ID-${CMAKE_SYSTEM_PROCESSOR}.cmake<br />
</syntaxhighlight><br />
<br />
file. This file can be used to modify settings such as compiler flags for the target. You should only have to set this variable if you are using a cross-compiler where each target needs special build settings. The value can be chosen freely, so it could be, for example, i386, IntelPXA255, or MyControlBoardRev42.<br />
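<br />
For illustration only, such a per-processor platform file (here a hypothetical Linux-GNU-IntelPXA255.cmake; the flag values are assumptions, not taken from the book) might just preset some compiler flags:<br />
<br />
<syntaxhighlight lang="text"><br />
# hypothetical Linux-GNU-IntelPXA255.cmake<br />
# preset default compiler flags for this particular target board<br />
set (CMAKE_C_FLAGS_INIT "-mcpu=xscale")<br />
set (CMAKE_CXX_FLAGS_INIT "-mcpu=xscale")<br />
</syntaxhighlight><br />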
<br />
In CMake code, the CMAKE_SYSTEM_XXX variables always describe the target platform. The same is true for the short WIN32, UNIX, APPLE variables. These variables can be used to test the properties of the target. If it is necessary to test the build host system, there is a corresponding set of variables:<br />
<br />
CMAKE_HOST_SYSTEM (page 651), CMAKE_HOST_SYSTEM_NAME (page 651), CMAKE_HOST_SYSTEM_VERSION (page 652), and CMAKE_HOST_SYSTEM_PROCESSOR (page 651); and also the short forms: CMAKE_HOST_WIN32 (page 652), CMAKE_HOST_UNIX (page 652) and CMAKE_HOST_APPLE (page 651).<br />
<br />
Since CMake cannot guess the target system, it cannot guess which compiler it should use. Setting the following variables defines what compilers to use for the target system.<br />
<br />
'''CMAKE_C_COMPILER''' This specifies the C compiler executable as either a full path or just the filename. If it is specified with full path, then this path will be preferred when searching for the C++ compiler and the other tools (binutils, linker, etc.). If the compiler is a GNU cross-compiler with a prefixed name (e.g. "arm-elf-gcc"), CMake will detect this and automatically find the corresponding C++ compiler (i.e. "arm-elf-c++"). The compiler can also be set via the CC environment variable. Setting CMAKE_C_COMPILER directly in a toolchain file has the advantage that the information about the target system is completely contained in this file, and it does not depend on environment variables.<br />
<br />
'''CMAKE_CXX_COMPILER''' This specifies the C++ compiler executable as either a full path or just the file name. It is handled the same way as CMAKE_C_COMPILER. If the toolchain is a GNU toolchain, it should suffice to set only CMAKE_C_COMPILER; CMake should find the corresponding C++ compiler automatically. As with CMAKE_C_COMPILER, the C++ compiler can also be set via the CXX environment variable.<br />
<br />
Once the system and the compiler are determined by CMake, it will load the corresponding files in the order described in Chapter 11 in the section called The Enable Language Process.<br />
<br />
<br />
===Finding External Libraries, Programs and Other Files===<br />
<br />
Most non-trivial projects make use of external libraries or tools. CMake offers the find_program, find_library, find_file, find_path, and find_package commands for this purpose. They search the file system in common places for these files and return the results. find_package is a bit different in that it does not actually search itself, but executes Find<*>.cmake modules, which in turn call the find_program, find_library, find_file, and find_path commands.<br />
<br />
When cross-compiling, these commands become more complicated. For example, when cross-compiling to Windows on a Linux system, getting /usr/lib/libjpeg.so as the result of the command find_package(JPEG) would be useless, since this would be the JPEG library for the host system and not the target system. In some cases, you want to find files that are meant for the target platform; in other cases you will want to find files for the build host. The following variables are designed to give you the flexibility to change how the typical find commands in CMake work, so that you can find both build host and target files as necessary.<br />
<br />
The toolchain will come with its own set of libraries and headers for the target platform, which are usually installed under a common prefix. It is a good idea to set up a directory where all the software that is built for the target platform will be installed, so that the software packages don't get mixed up with the libraries that come with the toolchain.<br />
<br />
The find_program() (page 306) command is typically used to find a program that will be executed during the build, so this should still search in the host file system, and not in the environment of the target platform. find_library is normally used to find a library that is then used for linking purposes, so this command should only search in the target environment. For find_path() (page 303) and find_file() (page 292), it is not so obvious; in many cases, they are used to search for headers, so by default they should only search in the target environment. The following CMake variables can be set to adjust the behavior of the find commands for cross-compiling.<br />
<br />
'''CMAKE_FIND_ROOT_PATH''' (page 643) This is a list of the directories that contain the target environment. Each of the directories listed here will be prepended to each of the search directories of every find command. Assuming your target environment is installed under /opt/eldk/ppc_74xx and your installation for that target platform goes to ~/install-eldk-ppc74xx, set CMAKE_FIND_ROOT_PATH to these two directories. Then find_library (JPEG_LIB jpeg) will search in /opt/eldk/ppc_74xx/lib, /opt/eldk/ppc_74xx/usr/lib, ~/install-eldk-ppc74xx/lib, ~/install-eldk-ppc74xx/usr/lib, and should result in /opt/eldk/ppc_74xx/usr/lib/libjpeg.so.<br />
<br />
By default, CMAKE_FIND_ROOT_PATH is empty. If set, first the directories prefixed with the path given in CMAKE_FIND_ROOT_PATH will be searched, and then the unprefixed versions of the same directories will be searched.<br />
<br />
By setting this variable, you are basically adding a new set of search prefixes to all of the find commands in CMake, but for some find commands you may not want to search the target or host directories. You can control how each find command invocation works by passing in one of the three following options when you call it: NO_CMAKE_FIND_ROOT_PATH, ONLY_CMAKE_FIND_ROOT_PATH, or CMAKE_FIND_ROOT_PATH_BOTH. You can also control how the find commands work using the following three variables.<br />
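<br />
As a sketch (the tool and library names here are hypothetical), individual calls can override the defaults like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# a code generator that must run on the build host:<br />
# never prepend the CMAKE_FIND_ROOT_PATH prefixes<br />
find_program (GEN_EXECUTABLE mygenerator NO_CMAKE_FIND_ROOT_PATH)<br />
<br />
# a library to link against: search only under the prefixes<br />
find_library (FOO_LIBRARY foo ONLY_CMAKE_FIND_ROOT_PATH)<br />
</syntaxhighlight><br />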
<br />
'''CMAKE_FIND_ROOT_PATH_MODE_PROGRAM''' (page 644) This sets the default behavior for the find_program command. It can be set to NEVER, ONLY, or BOTH. When set to NEVER, CMAKE_FIND_ROOT_PATH will not be used for find_program calls except where it is enabled explicitly. If set to ONLY, only the search directories with the prefixes coming from CMAKE_FIND_ROOT_PATH will be used by find_program. The default is BOTH, which means that first the prefixed directories, and then the unprefixed directories, will be searched.<br />
<br />
In most cases, find_program() (page 306) is used to search for an executable which will then be executed, e.g. using execute_process() (page 285) or add_custom_command() (page 269). Since in most cases an executable from the build host is required, setting CMAKE_FIND_ROOT_PATH_MODE_PROGRAM to NEVER is normally preferred.<br />
<br />
'''CMAKE_FIND_ROOT_PATH_MODE_LIBRARY''' (page 643) This is the same as above, but for the find_library command. In most cases this is used to find a library which will then be used for linking, so a library for the target is required. In most cases, it should be set to ONLY.<br />
<br />
'''CMAKE_FIND_ROOT_PATH_MODE_INCLUDE''' (page 643) This is the same as above and used for both find_path and find_file. In most cases, this is used for finding include directories, so the target environment should be searched. In most cases, it should be set to ONLY. If you also need to find files in the file system of the build host (e.g. some data files that will be processed during the build), you may need to adjust the behavior for those find_path or find_file calls using the NO_CMAKE_FIND_ROOT_PATH, ONLY_CMAKE_FIND_ROOT_PATH and CMAKE_FIND_ROOT_PATH_BOTH options.<br />
<br />
With a toolchain file set up as described, CMake now knows how to handle the target platform and the cross-compiler. We should now be able to build software for the target platform. For complex projects, there are more issues that must be taken care of.<br />
<br />
<br />
===System Inspection===<br />
<br />
Most portable software projects have a set of system inspection tests for determining the properties of the (target) system. The simplest way to check for a system feature with CMake is by testing variables. For this purpose, CMake provides the variables UNIX (page 655), WIN32 (page 656), and APPLE (page 650). When cross-compiling, these variables apply to the target platform; for testing the build host platform there are corresponding variables CMAKE_HOST_UNIX (page 652), CMAKE_HOST_WIN32 (page 652), and CMAKE_HOST_APPLE (page 651).<br />
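<br />
For example, a minimal sketch using the short variables (the messages are illustrative only):<br />
<br />
<syntaxhighlight lang="text"><br />
if (WIN32)<br />
  message (STATUS "Target platform is Windows")<br />
endif ()<br />
if (CMAKE_HOST_UNIX)<br />
  message (STATUS "Build host is UNIX-like")<br />
endif ()<br />
</syntaxhighlight><br />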
<br />
If this granularity is too coarse, the variables CMAKE_SYSTEM_NAME (page 653), CMAKE_SYSTEM (page 653), CMAKE_SYSTEM_VERSION (page 653), and CMAKE_SYSTEM_PROCESSOR (page 653) can be tested, along with their counterparts CMAKE_HOST_SYSTEM_NAME (page 651), CMAKE_HOST_SYSTEM (page 651), CMAKE_HOST_SYSTEM_VERSION (page 652), and CMAKE_HOST_SYSTEM_PROCESSOR (page 651), which contain the same information, but for the build host and not for the target system.<br />
<br />
<syntaxhighlight lang="text"><br />
if (CMAKE_SYSTEM MATCHES Windows)<br />
message (STATUS "Target system is Windows")<br />
endif ()<br />
<br />
if (CMAKE_HOST_SYSTEM MATCHES Linux)<br />
message (STATUS "Build host runs Linux")<br />
endif ()<br />
</syntaxhighlight><br />
<br />
<br />
====Using Compile Checks====<br />
<br />
In CMake, there are macros such as CHECK_INCLUDE_FILES and CHECK_C_SOURCE_RUNS that are used to test the properties of the platform. Most of these macros internally use either the try_compile() (page 343) or the try_run() (page 344) commands. The try_compile command works as expected when cross-compiling; it tries to compile the piece of code with the cross-compiling toolchain, which will give the expected result.<br />
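<br />
For instance, a compile-only feature test such as the following (a generic sketch, not taken from the book) is safe when cross-compiling, because nothing has to be executed; the toolchain only needs to compile and link the snippet:<br />
<br />
<syntaxhighlight lang="text"><br />
include (CheckCSourceCompiles)<br />
check_c_source_compiles ("<br />
  #include <stdint.h><br />
  int main(void) { uint64_t x = 0; return (int)x; }<br />
" HAVE_UINT64_T)<br />
</syntaxhighlight><br />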
<br />
All tests using try_run will not work, since the created executables cannot normally run on the build host. In some cases this might be possible (e.g. using virtual machines, emulation layers like Wine, or interfaces to the actual target), but CMake does not depend on such mechanisms. Depending on emulators during the build process would introduce a new set of potential problems: they may have a different view of the file system, use other line endings, require special hardware or software, etc.<br />
<br />
If try_run is invoked when cross-compiling, it will first try to compile the software, which will work the same way as when not cross compiling. If this succeeds, it will check the variable CMAKE_CROSSCOMPILING (page 626) to determine whether the resulting executable can be executed or not. If it cannot, it will create two cache variables, which then have to be set by the user or via the CMake cache. Assume the command looks like this:<br />
<br />
<syntaxhighlight lang="text"><br />
try_run (SHARED_LIBRARY_PATH_TYPE<br />
SHARED_LIBRARY_PATH_INFO_COMPILED<br />
${PROJECT_BINARY_DIR}/CMakeTmp<br />
${PROJECT_SOURCE_DIR}/CMake/SharedLibraryPathInfo.cxx<br />
OUTPUT_VARIABLE OUTPUT<br />
ARGS "LDPATH"<br />
)<br />
</syntaxhighlight><br />
<br />
In this example, the source file SharedLibraryPathInfo.cxx will be compiled, and if that succeeds, the resulting executable should be executed. The variable SHARED_LIBRARY_PATH_INFO_COMPILED will be set to the result of the build, i.e. TRUE or FALSE. CMake will create a cache variable SHARED_LIBRARY_PATH_TYPE and preset it to PLEASE_FILL_OUT-FAILED_TO_RUN. This variable must be set to what the exit code of the executable would have been if it had been executed on the target. Additionally, CMake will create a cache variable SHARED_LIBRARY_PATH_TYPE__TRYRUN_OUTPUT and preset it to PLEASE_FILL_OUT-NOTFOUND. This variable should be set to the output that the executable would print to stdout and stderr if it were executed on the target. This variable is only created if the try_run command was used with the RUN_OUTPUT_VARIABLE or the OUTPUT_VARIABLE argument. You have to fill in the appropriate values for these variables. To help you with this, CMake tries its best to give you useful information. To accomplish this, CMake creates a file ${CMAKE_BINARY_DIR}/TryRunResults.cmake, an example of which is shown here:<br />
<br />
<syntaxhighlight lang="text"><br />
# SHARED_LIBRARY_PATH_TYPE<br />
# indicates whether the executable would have been able to run<br />
# on its target platform. If so, set SHARED_LIBRARY_PATH_TYPE<br />
# to the exit code (in many cases 0 for success), otherwise<br />
# enter "FAILED_TO_RUN".<br />
<br />
# SHARED_LIBRARY_PATH_TYPE__TRYRUN_OUTPUT<br />
# contains the text the executable would have printed on<br />
# stdout and stderr. If the executable would not have been<br />
# able to run, set SHARED_LIBRARY_PATH_TYPE__TRYRUN_OUTPUT<br />
# empty. Otherwise check if the output is evaluated by the<br />
# calling CMake code. If so, check what the source file would<br />
# have printed when called with the given arguments.<br />
# The SHARED_LIBRARY_PATH_INFO_COMPILED variable holds the build<br />
# result for this TRY_RUN().<br />
#<br />
# Source file: ~/src/SharedLibraryPathInfo.cxx<br />
# Executable : ~/build/cmTryCompileExec-SHARED_LIBRARY_PATH_TYPE<br />
# Run arguments: LDPATH<br />
# Called from: [1] ~/src/CMakeLists.cmake<br />
<br />
set (SHARED_LIBRARY_PATH_TYPE<br />
"0"<br />
CACHE STRING "Result from TRY_RUN" FORCE)<br />
<br />
set (SHARED_LIBRARY_PATH_TYPE__TRYRUN_OUTPUT<br />
""<br />
CACHE STRING "Output from TRY_RUN" FORCE)<br />
</syntaxhighlight><br />
<br />
You can find all of the variables that CMake could not determine, from which CMake file they were called, the source file, the arguments for the executable, and the path to the executable. CMake will also copy the executables to the build directory; they have the names cmTryCompileExec-<name of the variable>, e.g. in this case cmTryCompileExec-SHARED_LIBRARY_PATH_TYPE. You can then try to run this executable manually on the actual target platform and check the results.<br />
<br />
Once you have these results, they have to be put into the CMake cache. This can be done by using ccmake/cmake-gui/"make edit_cache" and editing the variables directly in the cache. It is not possible to reuse these changes in another build directory or if CMakeCache.txt is removed.<br />
<br />
The recommended approach is to use the TryRunResults.cmake file created by CMake. You should copy it to a safe location (i.e. where it will not be removed if the build directory is deleted) and give it a useful name, e.g. TryRunResults-MyProject-eldk-ppc.cmake. The contents of this file have to be edited so that the set commands set the required variables to the appropriate values for the target system. This file can then be used to preload the CMake cache by using the -C option of cmake:<br />
<br />
<syntaxhighlight lang="text"><br />
src/build/ $ cmake -C ~/TryRunResults-MyProject-eldk-ppc.cmake .<br />
</syntaxhighlight><br />
<br />
You do not have to use the other CMake options again, as they are now in the cache. This way you can use TryRunResults-MyProject-eldk-ppc.cmake in multiple build trees, and it can be distributed with your project so that it is easier for other users to cross-compile it.<br />
<br />
<br />
===Running Executables Built in the Project===<br />
<br />
In some cases it is necessary that during a build, an executable is invoked that was built earlier in the same build; this is usually the case for code generators and similar tools. This does not work when cross-compiling, as the executables are built for the target platform and cannot run on the build host (without the use of virtual machines, compatibility layers, emulators, etc.). With CMake, these programs are created using add_executable, and executed with add_custom_command() (page 269) or add_custom_target() (page 272). The following three options can be used to support these executables with CMake. The old version of the CMake code could look something like this:<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (mygen gen.c)<br />
get_target_property (mygenLocation mygen LOCATION)<br />
add_custom_command (<br />
OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/generated.h"<br />
COMMAND ${mygenLocation}<br />
-o "${CMAKE_CURRENT_BINARY_DIR}/generated.h" )<br />
</syntaxhighlight><br />
<br />
Now we will show how this file can be modified so that it works when cross-compiling. The basic idea is that the executable is built only when doing a native build for the build host, and is then exported as an executable target to a CMake script file. This file is then included when cross-compiling, and the executable target for the executable mygen will be loaded; an imported target with the same name as the original target will be created. Since CMake 2.6, add_custom_command recognizes target names as executables, so the target name can simply be used for the command in add_custom_command; it is not necessary to use the LOCATION property to obtain the path of the executable:<br />
<br />
<syntaxhighlight lang="text"><br />
if (CMAKE_CROSSCOMPILING)<br />
find_package (MyGen)<br />
endif ()<br />
if (NOT CMAKE_CROSSCOMPILING)<br />
add_executable (mygen gen.c)<br />
export (TARGETS mygen FILE<br />
"${CMAKE_BINARY_DIR}/MyGenConfig.cmake")<br />
endif ()<br />
<br />
add_custom_command (<br />
OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/generated.h"<br />
COMMAND mygen -o "${CMAKE_CURRENT_BINARY_DIR}/generated.h" )<br />
</syntaxhighlight><br />
<br />
With the CMakeLists.txt modified like this, the project can be cross-compiled. First, a native build has to be done in order to create the necessary mygen executable. After that, the cross-compiling build can begin. The build directory of the native build has to be given to the cross-compiling build as the location of the MyGen package, so that find_package(MyGen) can find it:<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir build-native; cd build-native<br />
cmake ..<br />
make<br />
cd ..<br />
mkdir build-cross; cd build-cross<br />
cmake -DCMAKE_TOOLCHAIN_FILE=MyToolchain.cmake \<br />
-DMyGen_DIR=~/src/build-native/ ..<br />
make<br />
</syntaxhighlight><br />
<br />
This code works, but CMake versions prior to 2.6 will not be able to process it, as they do not know the export command or recognize the target name mygen in add_custom_command. A compatible version that works with CMake 2.4 looks like this:<br />
<br />
<syntaxhighlight lang="text"><br />
if (CMAKE_CROSSCOMPILING)<br />
find_package (MyGen)<br />
endif (CMAKE_CROSSCOMPILING)<br />
<br />
if (NOT CMAKE_CROSSCOMPILING)<br />
add_executable (mygen gen.c)<br />
if (COMMAND EXPORT)<br />
export (TARGETS mygen FILE<br />
"${CMAKE_BINARY_DIR}/MyGenConfig.cmake")<br />
endif (COMMAND EXPORT)<br />
endif (NOT CMAKE_CROSSCOMPILING)<br />
<br />
get_target_property (mygenLocation mygen LOCATION)<br />
<br />
add_custom_command (<br />
OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/generated.h"<br />
COMMAND ${mygenLocation}<br />
-o "${CMAKE_CURRENT_BINARY_DIR}/generated.h" )<br />
</syntaxhighlight><br />
<br />
In this case, the target is only exported if the export command exists and the location of the executable is retrieved using the LOCATION target property.<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir build-native; cd build-native<br />
cmake ..<br />
make<br />
cd ..<br />
mkdir build-cross; cd build-cross<br />
cmake -DCMAKE_TOOLCHAIN_FILE=MyToolchain.cmake \<br />
-DMyGen_DIR=~/src/build-native/ ..<br />
make<br />
</syntaxhighlight><br />
<br />
The "old" CMake code could also be using the utility_source command:<br />
<br />
<syntaxhighlight lang="text"><br />
subdirs (mygen)<br />
utility_source (MYGEN_LOCATION mygen mygen gen.c)<br />
add_custom_command (<br />
OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/generated.h"<br />
COMMAND ${MYGEN_LOCATION}<br />
-o "${CMAKE_CURRENT_BINARY_DIR}/generated.h" )<br />
</syntaxhighlight><br />
<br />
In this case, the CMake script doesn't have to be changed, but the invocation of CMake is more complicated, as each executable location has to be specified manually:<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir build-native; cd build-native<br />
cmake ..<br />
make<br />
cd ..<br />
mkdir build-cross; cd build-cross<br />
cmake -DCMAKE_TOOLCHAIN_FILE=MyToolchain.cmake<br />
-DMYGEN_LOCATION=~/src/build-native/bin/mygen ..<br />
make<br />
</syntaxhighlight><br />
<br />
<br />
===Cross-Compiling Hello World===<br />
<br />
Now let's actually start with the cross-compiling. The first step is to install a cross-compiling toolchain. If this is already installed, you can skip the next paragraph.<br />
<br />
There are many different approaches and projects that deal with cross-compiling for Linux, ranging from free software projects working on Linux-based PDAs to commercial embedded Linux vendors. Most of these projects come with their own way to build and use the respective toolchain. Any of these toolchains can be used with CMake; the only requirement is that it works in the normal file system and does not expect a "sandboxed" environment, as, for example, the Scratchbox project does.<br />
<br />
An easy-to-use toolchain with a relatively complete target environment is the Embedded Linux Development Kit (http://www.denx.de/wiki/DULG/ELDK). It supports ARM, PowerPC, and MIPS as target platforms. ELDK can be downloaded from ftp://ftp.sunet.se/pub/Linux/distributions/eldk/. The easiest way is to download the ISOs, mount them, and then install them:<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir mount-iso/<br />
sudo mount -tiso9660 mips-2007-01-21.iso mount-iso/ -o loop<br />
cd mount-iso/<br />
./install -d /home/alex/eldk-mips/<br />
...<br />
Preparing... ############################################### [100%]<br />
1:appWeb-mips_4KCle ############################################### [100%]<br />
Done<br />
ls /home/alex/eldk-mips/<br />
bin eldk_init etc mips_4KC mips_4KCle usr var version<br />
</syntaxhighlight><br />
<br />
ELDK (and other toolchains) can be installed anywhere, either in the home directory or system-wide if there are more users working with them. In this example, the toolchain will now be located in /home/alex/eldk-mips/usr/bin/ and the target environment is in /home/alex/eldk-mips/mips_4KC/.<br />
<br />
Now that a cross-compiling toolchain is installed, CMake has to be set up to use it. As already described, this is done by creating a toolchain file for CMake. In this example, the toolchain file looks like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# the name of the target operating system<br />
set (CMAKE_SYSTEM_NAME Linux)<br />
<br />
# which C and C++ compiler to use<br />
set (CMAKE_C_COMPILER /home/alex/eldk-mips/usr/bin/mips_4KC-gcc)<br />
set (CMAKE_CXX_COMPILER<br />
/home/alex/eldk-mips/usr/bin/mips_4KC-g++)<br />
<br />
# location of the target environment<br />
set (CMAKE_FIND_ROOT_PATH /home/alex/eldk-mips/mips_4KC<br />
/home/alex/eldk-mips-extra-install )<br />
<br />
# adjust the default behavior of the FIND_XXX() commands:<br />
# search for headers and libraries in the target environment,<br />
# search for programs in the host environment<br />
set (CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)<br />
set (CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)<br />
set (CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)<br />
</syntaxhighlight><br />
<br />
The toolchain files can be located anywhere, but it is a good idea to put them in a central place so that they can be reused in multiple projects. We will save this file as ~/Toolchains/Toolchain-eldk-mips4K.cmake. The variables mentioned above are set here: CMAKE_SYSTEM_NAME, the C/C++ compilers, and CMAKE_FIND_ROOT_PATH to specify where libraries and headers for the target environment are located. The find modes are also set up so that libraries and headers are searched for in the target environment only, whereas programs are searched for in the host environment only. Now we will cross-compile the hello world project from Chapter 2:<br />
<br />
<syntaxhighlight lang="text"><br />
project (Hello)<br />
add_executable (Hello Hello.c)<br />
</syntaxhighlight><br />
<br />
Run CMake, this time telling it to use the toolchain file from above:<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir Hello-eldk-mips<br />
cd Hello-eldk-mips<br />
cmake -DCMAKE_TOOLCHAIN_FILE=~/Toolchains/Toolchain-eldk-mips4K.cmake ..<br />
make VERBOSE=1<br />
</syntaxhighlight><br />
<br />
This should give you an executable that can run on the target platform. Thanks to the VERBOSE=1 option, you should see that the cross-compiler is used. Now we will make the example a bit more sophisticated by adding system inspection and install rules. We will build and install a shared library named Tools, and then build the Hello application which links to the Tools library.<br />
<br />
<syntaxhighlight lang="text"><br />
include (CheckIncludeFiles)<br />
check_include_files (stdio.h HAVE_STDIO_H)<br />
<br />
set (VERSION_MAJOR 2)<br />
set (VERSION_MINOR 6)<br />
set (VERSION_PATCH 0)<br />
<br />
configure_file (config.h.in ${CMAKE_BINARY_DIR}/config.h)<br />
<br />
add_library (Tools SHARED tools.cxx)<br />
set_target_properties (Tools PROPERTIES<br />
VERSION ${VERSION_MAJOR}.${VERSION_MINOR}.${VERSION_PATCH}<br />
SOVERSION ${VERSION_MAJOR})<br />
<br />
install (FILES tools.h DESTINATION include)<br />
install (TARGETS Tools DESTINATION lib)<br />
</syntaxhighlight><br />
<br />
There is no difference from a normal CMakeLists.txt; no special prerequisites are required for cross-compiling. The CMakeLists.txt checks that the header stdio.h is available and sets the version number for the Tools library. These are configured into config.h, which is then used in tools.cxx. The version number is also used to set the version number of the Tools library. The library and headers are installed to ${CMAKE_INSTALL_PREFIX}/lib and ${CMAKE_INSTALL_PREFIX}/include respectively (CMAKE_INSTALL_PREFIX, page 645).<br />
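<br />
The config.h.in template itself is not shown in the text; a minimal version consistent with the CMakeLists.txt above could look like this (a sketch, with names taken from the variables used there):<br />
<br />
<syntaxhighlight lang="text"><br />
#cmakedefine HAVE_STDIO_H<br />
#define VERSION_MAJOR @VERSION_MAJOR@<br />
#define VERSION_MINOR @VERSION_MINOR@<br />
#define VERSION_PATCH @VERSION_PATCH@<br />
</syntaxhighlight><br />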
<br />
Running CMake gives this result:<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir build-eldk-mips<br />
cd build-eldk-mips<br />
cmake -DCMAKE_TOOLCHAIN_FILE=~/Toolchains/Toolchain-eldk-mips4K.cmake \<br />
-DCMAKE_INSTALL_PREFIX=~/eldk-mips-extra-install ..<br />
-- The C compiler identification is GNU<br />
-- The CXX compiler identification is GNU<br />
-- Check for working C compiler: /home/alex/eldk-mips/usr/bin/mips_4KC-gcc<br />
-- Check for working C compiler:<br />
/home/alex/eldk-mips/usr/bin/mips_4KC-gcc -- works<br />
-- Check size of void*<br />
-- Check size of void* - done<br />
-- Check for working CXX compiler: /home/alex/eldk-mips/usr/bin/mips_4KC-g++<br />
-- Check for working CXX compiler:<br />
/home/alex/eldk-mips/usr/bin/mips_4KC-g++ -- works<br />
-- Looking for include files HAVE_STDIO_H<br />
-- Looking for include files HAVE_STDIO_H - found<br />
-- Configuring done<br />
-- Generating done<br />
-- Build files have been written to: /home/alex/src/tests/Tools/build-mips<br />
make install<br />
Scanning dependencies of target Tools<br />
[100%] Building CXX object CMakeFiles/Tools.dir/tools.o<br />
Linking CXX shared library libTools.so<br />
[100%] Built target Tools<br />
Install the project...<br />
-- Install configuration: ""<br />
-- Installing /home/alex/eldk-mips-extra-install/include/tools.h<br />
-- Installing /home/alex/eldk-mips-extra-install/lib/libTools.so<br />
</syntaxhighlight><br />
<br />
As can be seen in the output above, CMake detected the correct compiler, found the stdio.h header for the target platform, and successfully generated the Makefiles. The make command was then invoked, which successfully built and installed the library in the specified installation directory. Now we can build an executable that uses the Tools library and does some system inspection:<br />
<br />
<syntaxhighlight lang="text"><br />
project (HelloTools)<br />
<br />
find_package (ZLIB REQUIRED)<br />
<br />
find_library (TOOLS_LIBRARY Tools)<br />
find_path (TOOLS_INCLUDE_DIR tools.h)<br />
<br />
if (NOT TOOLS_LIBRARY OR NOT TOOLS_INCLUDE_DIR)<br />
message (FATAL_ERROR "Tools library not found")<br />
endif (NOT TOOLS_LIBRARY OR NOT TOOLS_INCLUDE_DIR)<br />
<br />
set (CMAKE_INCLUDE_CURRENT_DIR TRUE)<br />
set (CMAKE_INCLUDE_DIRECTORIES_PROJECT_BEFORE TRUE)<br />
<br />
include_directories ("${TOOLS_INCLUDE_DIR}"<br />
"${ZLIB_INCLUDE_DIR}")<br />
<br />
add_executable (HelloTools main.cpp)<br />
target_link_libraries (HelloTools ${TOOLS_LIBRARY}<br />
${ZLIB_LIBRARIES})<br />
set_target_properties (HelloTools PROPERTIES<br />
INSTALL_RPATH_USE_LINK_PATH TRUE)<br />
<br />
install (TARGETS HelloTools DESTINATION bin)<br />
</syntaxhighlight><br />
<br />
Building works in the same way as with the library; the toolchain file has to be used and then it should just work:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake -DCMAKE_TOOLCHAIN_FILE=~/Toolchains/Toolchain-eldk-mips4K.cmake \<br />
-DCMAKE_INSTALL_PREFIX=~/eldk-mips-extra-install ..<br />
-- The C compiler identification is GNU<br />
-- The CXX compiler identification is GNU<br />
-- Check for working C compiler: /home/alex/denx-mips/usr/bin/mips_4KC-gcc<br />
-- Check for working C compiler: <br />
/home/alex/denx-mips/usr/bin/mips_4KC-gcc -- works<br />
-- Check size of void*<br />
-- Check size of void* - done<br />
-- Check for working CXX compiler: /home/alex/denx-mips/usr/bin/mips_4KC-g++<br />
-- Check for working CXX compiler:<br />
/home/alex/denx-mips/usr/bin/mips_4KC-g++ -- works<br />
-- Found ZLIB: /home/alex/denx-mips/mips_4KC/usr/lib/libz.so<br />
-- Found Tools library: /home/alex/denx-mips-extra-install/lib/libTools.so<br />
-- Configuring done<br />
-- Generating done<br />
-- Build files have been written to:<br />
/home/alex/src/tests/HelloTools/build-eldk-mips<br />
make<br />
[100%] Building CXX object CMakeFiles/HelloTools.dir/main.o<br />
Linking CXX executable HelloTools<br />
[100%] Built target HelloTools<br />
</syntaxhighlight><br />
<br />
Obviously CMake found the correct zlib and also libTools.so, which had been installed in the previous step.<br />
<br />
<br />
===Cross-Compiling for a Microcontroller===<br />
<br />
CMake can be used for more than cross-compiling to targets with operating systems; it is also possible to use it when developing for deeply-embedded devices with small microcontrollers and no operating system at all. As an example, we will use the Small Devices C Compiler (http://sdcc.sourceforge.net), which runs under Windows, Linux, and Mac OS X, and supports 8- and 16-bit microcontrollers. For driving the build, we will use MS NMake under Windows. As before, the first step is to write a toolchain file so that CMake knows about the target platform. For sdcc, it should look something like this:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CMAKE_SYSTEM_NAME Generic)<br />
set (CMAKE_C_COMPILER "c:/Program Files/SDCC/bin/sdcc.exe")<br />
</syntaxhighlight><br />
<br />
The system name "Generic" should be used as the CMAKE_SYSTEM_NAME (page 653) for targets that do not have an operating system. The CMake platform file for "Generic" doesn't set up any specific features; all it assumes is that the target platform does not support shared libraries, so all properties will depend on the compiler and CMAKE_SYSTEM_PROCESSOR (page 653). The toolchain file above does not set the FIND-related variables. As long as none of the find commands is used in the CMake files, this is fine; in many projects for small microcontrollers, this will be the case. The CMakeLists.txt should look like the following:<br />
<br />
<syntaxhighlight lang="text"><br />
project (Blink C)<br />
<br />
add_library (blink blink.c)<br />
<br />
add_executable (hello main.c)<br />
target_link_libraries (hello blink)<br />
</syntaxhighlight><br />
<br />
There are no major differences from other CMakeLists.txt files. One important point is that the language "C" is enabled explicitly using the PROJECT command. If this is not done, CMake will also try to enable support for C++, which will fail as sdcc only supports C. Running CMake and building the project should work as usual:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake -G"NMake Makefiles" \<br />
-DCMAKE_TOOLCHAIN_FILE=c:/Toolchains/Toolchain-sdcc.cmake ..<br />
-- The C compiler identification is SDCC<br />
-- Check for working C compiler: c:/program files/sdcc/bin/sdcc.exe<br />
-- Check for working C compiler: c:/program files/sdcc/bin/sdcc.exe -- works<br />
-- Check size of void*<br />
-- Check size of void* - done<br />
-- Configuring done<br />
-- Generating done<br />
-- Build files have been written to: C:/src/tests/blink/build<br />
<br />
nmake<br />
Microsoft (R) Program Maintenance Utility Version 7.10.3077<br />
Copyright (C) Microsoft Corporation. All rights reserved.<br />
<br />
Scanning dependencies of target blink<br />
[ 50%] Building C object CMakeFiles/blink.dir/blink.rel<br />
Linking C static library blink.lib<br />
[ 50%] Built target blink<br />
Scanning dependencies of target hello<br />
[100%] Building C object CMakeFiles/hello.dir/main.rel<br />
Linking C executable hello.ihx<br />
[100%] Built target hello<br />
</syntaxhighlight><br />
<br />
This was a simple example using NMake with sdcc and the default settings of sdcc. Of course, more sophisticated project layouts are possible. For this kind of project, it is also a good idea to set up an install directory where reusable libraries can be installed, so that it is easier to use them in multiple projects. It is normally necessary to choose the correct target platform for sdcc; not everybody uses the i8051, which is the default for sdcc. The recommended way to do this is by setting CMAKE_SYSTEM_PROCESSOR.<br />
<br />
This will cause CMake to search for and load the platform file Platform/Generic-SDCC-C-${CMAKE_SYSTEM_PROCESSOR}.cmake. This file is loaded right before Platform/Generic-SDCC-C.cmake, so it can be used to set up the compiler and linker flags for the specific target hardware and project. Therefore, a slightly more complex toolchain file is required:<br />
<br />
<syntaxhighlight lang="text"><br />
get_filename_component (_ownDir<br />
"${CMAKE_CURRENT_LIST_FILE}" PATH)<br />
set (CMAKE_MODULE_PATH "${_ownDir}/Modules" ${CMAKE_MODULE_PATH})<br />
set (CMAKE_SYSTEM_NAME Generic)<br />
set (CMAKE_C_COMPILER "c:/Program Files/SDCC/bin/sdcc.exe")<br />
set (CMAKE_SYSTEM_PROCESSOR "Test_DS80C400_Rev_1")<br />
<br />
# here is the target environment located<br />
set (CMAKE_FIND_ROOT_PATH "c:/Program Files/SDCC"<br />
"c:/ds80c400-install" )<br />
<br />
# adjust the default behavior of the FIND_XXX() commands:<br />
# search for headers and libraries in the target environment<br />
# search for programs in the host environment<br />
set (CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)<br />
set (CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)<br />
set (CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)<br />
</syntaxhighlight><br />
<br />
This toolchain file contains a few new settings; it is also about the most complicated toolchain file you should ever need. CMAKE_SYSTEM_PROCESSOR is set to Test_DS80C400_Rev_1, an identifier for the specific target hardware. This has the effect that CMake will try to load Platform/Generic-SDCC-C-Test_DS80C400_Rev_1.cmake. As this file does not exist in the CMake system module directory, the CMake variable CMAKE_MODULE_PATH (page 646) has to be adjusted so that this file can be found. If this toolchain file is saved to c:/Toolchains/sdcc-ds400.cmake, the hardware-specific file should be saved in c:/Toolchains/Modules/Platform/. An example of this is shown below:<br />
<br />
<syntaxhighlight lang="text"><br />
set (CMAKE_C_FLAGS_INIT "-mds390 --use-accelerator")<br />
set (CMAKE_EXE_LINKER_FLAGS_INIT "")<br />
</syntaxhighlight><br />
<br />
This will select the DS80C390 as the target platform and add the --use-accelerator argument to the default compile flags. In this example the "NMake Makefiles" generator was used. In the same way, the "MinGW Makefiles" generator could be used with GNU make from MinGW (or another Windows version of GNU make; at least version 3.78 is required), or the "Unix Makefiles" generator under UNIX. Any of the Makefile-based IDE project generators could also be used, e.g. the Eclipse, CodeBlocks, or KDevelop3 generators.<br />
<br />
<br />
===Cross Compiling an Existing Project===<br />
<br />
Existing CMake-based projects may need some work so that they can be cross-compiled; other projects may work without any modifications. One such project is FLTK, the Fast Light Toolkit. We will compile FLTK on a Linux machine using the MinGW cross-compiler for Windows.<br />
<br />
The first step is to install the MinGW cross-compiler. For some Linux distributions, there are ready-to-use binary packages; for Debian, the package name is mingw32. Once this is installed, set up a toolchain file as described above. It should look something like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# the name of the target operating system<br />
set (CMAKE_SYSTEM_NAME Windows)<br />
<br />
# which compiler to use<br />
set (CMAKE_C_COMPILER i586-mingw32msvc-gcc)<br />
set (CMAKE_CXX_COMPILER i586-mingw32msvc-g++)<br />
<br />
# where are the target libraries and headers installed ?<br />
set (CMAKE_FIND_ROOT_PATH /usr/i586-mingw32msvc<br />
/home/alex/mingw-install )<br />
<br />
# find_program() should by default NEVER search the target tree<br />
# adjust the default behavior of the FIND_XXX() commands:<br />
# search for headers and libraries in the target environment<br />
# search for programs in the host environment<br />
set (CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)<br />
set (CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)<br />
set (CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)<br />
</syntaxhighlight><br />
<br />
Once this is working, run CMake with the appropriate options on FLTK:<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir build-mingw<br />
cd build-mingw<br />
cmake -DCMAKE_TOOLCHAIN_FILE=~/Toolchains/Toolchain-mingw32.cmake \<br />
-DCMAKE_INSTALL_PREFIX=~/mingw-install ..<br />
-- The C compiler identification is GNU<br />
-- The CXX compiler identification is GNU<br />
-- Check for working C compiler: /usr/bin/i586-mingw32msvc-gcc<br />
-- Check for working C compiler: /usr/bin/i586-mingw32msvc-gcc -- works<br />
</syntaxhighlight><br />
<br />
In FLTK, the utility_source command is used to build the executable fluid, whose location is put into the CMake variable FLUID_COMMAND. If you intend to run this executable, you need to preload the cache with the full path to a version of that program that can be run on the build host.<br />
<br />
<syntaxhighlight lang="text"><br />
-- Configuring done<br />
-- Generating done<br />
-- Build files have been written to:<br />
/home/alex/src/fltk-1.1.x-r5940/build-mingw<br />
</syntaxhighlight><br />
<br />
Below you can see a warning from CMake about the use of the utility_source() (page 350) command. To find out more, CMake offers the --debug-output argument:<br />
<br />
<syntaxhighlight lang="text"><br />
rm -rf *<br />
cmake -DCMAKE_TOOLCHAIN_FILE=~/Toolchains/Toolchain-mingw32.cmake \<br />
-DCMAKE_INSTALL_PREFIX=~/mingw-install .. --debug-output<br />
...<br />
UTILITY_SOURCE is used in cross compiling mode for<br />
FLUID_COMMAND. If your intention is to run this executable, you<br />
need to preload the cache with the full path to a version of<br />
that program, which runs on this build machine.<br />
Called from: [1] /home/alex/src/fltk-1.1.x-r5940/CMakeLists.txt<br />
</syntaxhighlight><br />
<br />
This tells us that utility_source has been called from /home/alex/src/fltk-1.1.x-r5940/CMakeLists.txt, then CMake processed some more directories, and finally created Makefiles in each subdirectory. Examining the top-level CMakeLists.txt shows the following:<br />
<br />
<syntaxhighlight lang="text"><br />
# Set the fluid executable path<br />
utility_source (FLUID_COMMAND fluid fluid fluid.cxx)<br />
set (FLUID_COMMAND "${FLUID_COMMAND}" CACHE INTERNAL "" FORCE)<br />
</syntaxhighlight><br />
<br />
FLUID_COMMAND is used to hold the path for the executable fluid, which is built by the project. Fluid is used during the build to generate code, so the cross-compiled executable will not work, and instead a native fluid has to be used. In the following example, the variable FLUID_COMMAND is set to the location of a fluid executable for the build host, which is then used in the cross-compiling build to generate code that will be compiled for the target system:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake . -DFLUID_COMMAND=/.../fltk-1.1.x-r5940/build-native/bin/fluid<br />
...<br />
-- Configuring done<br />
-- Generating done<br />
make<br />
Scanning dependencies of target fltk_zlib<br />
[ 0%] Building C object zlib/CMakeFiles/fltk_zlib.dir/adler32.obj<br />
[ 0%] Building C object zlib/CMakeFiles/fltk_zlib.dir/compress.obj<br />
...<br />
Scanning dependencies of target valuators<br />
[100%] Building CXX object test/CMakeFiles/valuators.dir/valuators.obj<br />
Linking CXX executable ../bin/valuators.exe<br />
[100%] Built target valuators<br />
</syntaxhighlight><br />
<br />
That's it; the executables are now in build-mingw/bin/ and can be run via Wine or by copying them to a Windows system.<br />
<br />
<br />
===Cross-Compiling a Complex Project - VTK===<br />
<br />
Building a complex project is a multi-step process. Complex in this case means that the project uses tests that run executables, and that it builds executables which are used later in the build to generate code (or something similar). One such project is VTK, which uses several try_run() (page 344) tests and creates several code generators. When running CMake on the project, every try_run command will produce an error message; at the end there will be a TryRunResults.cmake file in the build directory. You need to go through all of the entries of this file and fill in the appropriate values. If you are uncertain about the correct result, you can also try to execute the test binaries on the real target platform; they are saved in the binary directory.<br />
<br />
VTK contains several code generators, one of which is ProcessShader. These code generators are added using add_executable() (page 273); get_target_property(LOCATION) is used to get the locations of the resulting binaries, which are then used in add_custom_command() (page 269) or add_custom_target() (page 272) commands. Since the cross-compiled executables cannot be executed during the build, the add_executable() (page 273) calls are surrounded by if (NOT CMAKE_CROSSCOMPILING) commands, and the executable targets are imported into the project using the add_executable command with the IMPORTED option. These import statements are in the file VTKCompileToolsConfig.cmake, which does not have to be created manually; it is created by a native build of VTK.<br />
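The pattern described above can be sketched roughly as follows; the target and file names here are illustrative, not VTK's actual code:<br />
<br />
<syntaxhighlight lang="text"><br />
if (NOT CMAKE_CROSSCOMPILING)<br />
  # native build: build the code generator and export it<br />
  add_executable (ProcessShader ProcessShader.cxx)<br />
  export (TARGETS ProcessShader<br />
          FILE ${CMAKE_BINARY_DIR}/VTKCompileToolsConfig.cmake)<br />
else ()<br />
  # cross build: import the natively-built code generator<br />
  include (${VTKCompileTools_DIR}/VTKCompileToolsConfig.cmake)<br />
endif ()<br />
</syntaxhighlight><br />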
<br />
In order to cross-compile VTK, you need to:<br />
<br />
* Install a toolchain and create a toolchain file for CMake.<br />
* Build VTK natively for the build host.<br />
* Run CMake for the target platform.<br />
* Complete TryRunResults.cmake.<br />
* Use the VTKCompileToolsConfig.cmake file from the native build.<br />
* Build.<br />
<br />
So first, build a native VTK for the build host using the standard procedure.<br />
<br />
<syntaxhighlight lang="text"><br />
cvs -d :pserver:anonymous@public.kitware.com:/cvsroot/VTK co VTK<br />
cd VTK<br />
mkdir build-native; cd build-native<br />
ccmake ..<br />
make<br />
</syntaxhighlight><br />
<br />
Ensure that all required options are enabled using ccmake; e.g. if you need Python wrapping for the target platform, you must also enable Python wrapping in build-native/. Once this build has finished, there will be a VTKCompileToolsConfig.cmake file in build-native/. If this succeeded, we can continue with cross-compiling the project, in this example for an IBM BlueGene supercomputer.<br />
<br />
<syntaxhighlight lang="text"><br />
cd VTK<br />
mkdir build-bg1-gcc<br />
cd build-bg1-gcc<br />
cmake -DCMAKE_TOOLCHAIN_FILE=~/Toolchains/Toolchain-BlueGeneL-gcc.cmake \<br />
-DVTKCompileTools_DIR=~/VTK/build-native/ ..<br />
</syntaxhighlight><br />
<br />
This will finish with an error message for each try_run and a TryRunResults.cmake file, which you have to complete as described above. You should save the file to a safe location; otherwise it will be overwritten on the next CMake run.<br />
<br />
<syntaxhighlight lang="text"><br />
cp TryRunResults.cmake ../TryRunResults-VTK-BlueGeneL-gcc.cmake<br />
ccmake -C ../TryRunResults-VTK-BlueGeneL-gcc.cmake .<br />
...<br />
make<br />
</syntaxhighlight><br />
<br />
On the second run of ccmake, all the other arguments can be skipped, as they are now in the cache. It is also possible to point CMake at the build directory containing the CMakeCache.txt, and it will figure out that this is the build directory.<br />
<br />
<br />
===Some Tips and Tricks===<br />
<br />
====Dealing with try_run tests====<br />
<br />
In order to make cross-compiling your project easier, try to avoid try_run() (page 344) tests and use other methods to test things instead. For examples of how this can be done, consider the test for endianness in CMake/Modules/TestBigEndian.cmake, and the test for the compiler id using the source file CMake/Modules/CMakeCCompilerId.c. In both, try_compile() (page 343) is used to compile the source file into an executable in which the desired information is encoded as a text string. Using the COPY_FILE option of try_compile, this executable is copied to a temporary location, and all strings are extracted from the file using file() (page 287) (STRINGS). The test result is then obtained by applying regular expressions to these strings.<br />
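As a sketch of this technique (the file and variable names are made up for illustration), the test source embeds a marker string and CMake extracts it without running the binary:<br />
<br />
<syntaxhighlight lang="text"><br />
# info.c contains something like:<br />
#   const char info[] = "INFO:feature[yes]";<br />
#   int main(void) { return 0; }<br />
try_compile (COMPILE_OK<br />
             ${CMAKE_BINARY_DIR}<br />
             ${CMAKE_SOURCE_DIR}/info.c<br />
             COPY_FILE ${CMAKE_BINARY_DIR}/info.bin)<br />
file (STRINGS ${CMAKE_BINARY_DIR}/info.bin infoStrings<br />
      REGEX "INFO:feature")<br />
if ("${infoStrings}" MATCHES "INFO:feature\\[([a-z]+)\\]")<br />
  set (FEATURE_RESULT "${CMAKE_MATCH_1}")<br />
endif ()<br />
</syntaxhighlight><br />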
<br />
If you cannot avoid try_run tests, try to use only the exit code from the run, not the output of the process. Then, when cross-compiling, only the exit code has to be preset for the try_run test, not the stdout and stderr contents, and the OUTPUT_VARIABLE and RUN_OUTPUT_VARIABLE options of try_run can be omitted.<br />
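For example, a try_run test that relies only on the exit code might look like this sketch (the source file name is hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# no OUTPUT_VARIABLE or RUN_OUTPUT_VARIABLE is requested,<br />
# so only RUN_RESULT needs to be preset when cross-compiling<br />
try_run (RUN_RESULT COMPILE_RESULT<br />
         ${CMAKE_BINARY_DIR}<br />
         ${CMAKE_SOURCE_DIR}/check_feature.c)<br />
if (RUN_RESULT EQUAL 0)<br />
  set (HAVE_FEATURE TRUE)<br />
endif ()<br />
</syntaxhighlight><br />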
<br />
Once you have done that and have created and completed a correct TryRunResults.cmake file for the target platform, you might consider adding this file to the project's sources so that it can be reused by others. These files are per-target and per-toolchain.<br />
<br />
====Target platform and toolchain issues====<br />
<br />
If your toolchain is unable to build a simple program without special arguments, such as a linker script or a memory layout file, the tests CMake runs initially will fail. To make this work, the CMake module CMakeForceCompiler offers the following macros:<br />
<br />
<syntaxhighlight lang="text"><br />
CMAKE_FORCE_SYSTEM (name version processor)<br />
CMAKE_FORCE_C_COMPILER (compiler compiler_id sizeof_void_p)<br />
CMAKE_FORCE_CXX_COMPILER (compiler compiler_id)<br />
</syntaxhighlight><br />
<br />
These macros can be used in a toolchain file so that the required variables will be preset and the CMake tests avoided.<br />
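A toolchain file using these macros might look like the following sketch; the compiler names, compiler ID, and pointer size are examples only:<br />
<br />
<syntaxhighlight lang="text"><br />
include (CMakeForceCompiler)<br />
set (CMAKE_SYSTEM_NAME Generic)<br />
# preset the compilers so the initial compiler tests are skipped<br />
CMAKE_FORCE_C_COMPILER (arm-elf-gcc GNU 4)<br />
CMAKE_FORCE_CXX_COMPILER (arm-elf-g++ GNU)<br />
</syntaxhighlight><br />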
<br />
<br />
====RPATH handling under UNIX====<br />
<br />
For native builds, CMake builds executables and libraries by default with RPATH. In the build tree, the RPATH is set so that the executables can be run from the build tree; i.e. the RPATH points into the build tree. When installing the project, CMake links the executables again, this time with the RPATH for the install tree, which is empty by default.<br />
<br />
When cross-compiling you probably want to set up RPATH handling differently. As the executable cannot run on the build host, it makes more sense to build it with the install RPATH right from the start. There are several CMake variables and target properties for adjusting RPATH handling.<br />
<br />
<syntaxhighlight lang="text"><br />
set (CMAKE_BUILD_WITH_INSTALL_RPATH TRUE)<br />
set (CMAKE_INSTALL_RPATH "<whatever you need>")<br />
</syntaxhighlight><br />
<br />
With these two settings, the targets will be built with the install RPATH instead of the build RPATH, which avoids the need to link them again when installing. If you don't need RPATH support in your project, you don't need to set CMAKE_INSTALL_RPATH (page 660); it is empty by default.<br />
<br />
Setting CMAKE_INSTALL_RPATH_USE_LINK_PATH (page 660) to TRUE is useful for native builds, since it automatically collects the RPATH from all libraries against which a target links. For cross-compiling it should be left at the default of FALSE, because the automatically generated RPATH will usually be wrong on the target, which will probably have a different file system layout than the build host.<br />
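The same settings can also be applied per target instead of globally; a sketch, with an example RPATH value:<br />
<br />
<syntaxhighlight lang="text"><br />
set_target_properties (HelloTools PROPERTIES<br />
                       BUILD_WITH_INSTALL_RPATH TRUE<br />
                       INSTALL_RPATH "/opt/target/lib")<br />
</syntaxhighlight><br />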
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>
<hr />
<div>==CHAPTER SEVEN::CONVERTING EXISTING SYSTEMS TO CMAKE==<br />
<br />
For many people, the first thing they will do with CMake is convert an existing project from an older build system to CMake. This can be a fairly easy process, but there are a few issues to consider. This section will address those issues and provide some suggestions for effectively converting a project over to CMake. The first issue to consider when converting to CMake is the project's directory structure.<br />
<br />
===Source Code Directory Structures===<br />
<br />
Most small projects will have their source code in either the top level directory or in a directory named src or source. Even if all of the source code is in a subdirectory, we highly recommend creating a CMakeLists file for the top level directory. There are two reasons for this. First, it can be confusing to some people that they must run CMake on the subdirectory of the project, instead of the main directory. Second, you may want to install documentation or other support files from the other directories. By having a CMakeLists file at the top of the project, you can use the add_subdirectory() (page 277) command to step down into the documentation directory where its CMakeLists file can install the documentation (you can have a CMakeLists file for a documentation directory with no targets or source code).<br />
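A minimal top-level CMakeLists file of this kind might look like the following sketch (the directory and file names are examples):<br />
<br />
<syntaxhighlight lang="text"><br />
project (Foo)<br />
<br />
add_subdirectory (src)<br />
add_subdirectory (doc)<br />
<br />
# doc/CMakeLists.txt needs no targets, only e.g.:<br />
#   install (FILES foo.html DESTINATION doc)<br />
</syntaxhighlight><br />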
<br />
For projects that have source code in multiple directories, there are a few options. One option that many Makefile-based projects use is to have a single Makefile at the top-level directory that lists all the source files to compile in their subdirectories. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
SOURCES=\<br />
subdir1/foo.cxx \<br />
subdir1/foo2.cxx \<br />
subdir2/gah.cxx \<br />
subdir2/bar.cxx<br />
</syntaxhighlight><br />
<br />
This approach works just as well with CMake using a similar syntax:<br />
<br />
<syntaxhighlight lang="text"><br />
set (SOURCES<br />
subdir1/foo.cxx<br />
subdir1/foo2.cxx<br />
subdir2/gah.cxx<br />
subdir2/bar.cxx<br />
)<br />
</syntaxhighlight><br />
<br />
Another option is to have each subdirectory build a library or libraries that can then be linked into the executables. In those cases, each subdirectory would define its own list of source files and add the appropriate targets. A third option is a mixture of the first two; each subdirectory can have a CMakeLists file that lists its sources, but the top-level CMakeLists file will not use the add_subdirectory command to step into the subdirectories. Instead, the top-level CMakeLists file will use the include() (page 317) command to include each of the subdirectories' CMakeLists files. For example, a top-level CMakeLists file might include the following code:<br />
<br />
<syntaxhighlight lang="text"><br />
# collect the files for subdir1<br />
include (subdir1/CMakeLists.txt)<br />
foreach (FILE ${FILES})<br />
set (subdir1Files ${subdir1Files} subdir1/${FILE})<br />
endforeach (FILE)<br />
<br />
# collect the files for subdir2<br />
include (subdir2/CMakeLists.txt)<br />
foreach (FILE ${FILES})<br />
set (subdir2Files ${subdir2Files} subdir2/${FILE})<br />
endforeach (FILE)<br />
<br />
# add the source files to the executable<br />
add_executable (foo ${subdir1Files} ${subdir2Files})<br />
</syntaxhighlight><br />
<br />
While the CMakeLists files in the subdirectories might look like the following:<br />
<br />
<syntaxhighlight lang="text"><br />
# list the source files for this directory<br />
set (FILES<br />
foo1.cxx<br />
foo2.cxx<br />
)<br />
</syntaxhighlight><br />
<br />
The approach you use is entirely up to you. For large projects, having multiple shared libraries can certainly improve build times when changes are made. For smaller projects, the other two approaches have their advantages. The main suggestion here is to choose a strategy and stick with it.<br />
<br />
<br />
===Build Directories===<br />
<br />
The next issue to consider is where to put the resulting object files, libraries, and executables. There are a few different, commonly used approaches, and some work better with CMake than others. Probably the most common approach is to produce the binary files in the same directory as the source files. For some Windows generators, such as Visual Studio, the binaries are actually kept in a subdirectory matching the selected configuration, e.g. debug or release. CMake supports this approach by default. A closely-related approach is to put the binary files into a separate tree that has the same structure as the source tree. For example, if the source tree looked like the following:<br />
<br />
<syntaxhighlight lang="text"><br />
foo/<br />
subdir1<br />
subdir2<br />
</syntaxhighlight><br />
<br />
the binary tree might look like:<br />
<br />
<syntaxhighlight lang="text"><br />
foobin/<br />
subdir1<br />
subdir2<br />
</syntaxhighlight><br />
<br />
CMake also supports this structure by default. Switching between in-source builds and out-of-source builds is simply a matter of changing the binary directory when you run CMake (see 'How to Run CMake?' in Chapter 2). Note that if you have already done an in-source build and want to switch to an out-of-source build, you should start with a fresh copy of the source tree. If you need to support multiple architectures from one source tree, we highly recommend a directory structure like the following:<br />
<br />
<syntaxhighlight lang="text"><br />
projectfoo/<br />
foo/<br />
subdir1<br />
subdir2<br />
foo-linux/<br />
subdir1<br />
subdir2<br />
foo-osx/<br />
subdir1<br />
subdir2<br />
foo-solaris/<br />
subdir1<br />
subdir2<br />
</syntaxhighlight><br />
<br />
That way, each architecture has its own build directory and will not interfere with any other architecture. Remember that not only are the object files kept in the binary directories, but also any configured files that are typically written to the binary tree. Another tree structure found primarily on UNIX projects is one where the binary files for different architectures are kept in subdirectories of the source tree (see below). CMake does not work well with this structure, so we recommend switching to the separate build tree structure shown above.<br />
<br />
<syntaxhighlight lang="text"><br />
foo/<br />
subdir1/<br />
linux<br />
solaris<br />
hpux<br />
subdir2/<br />
linux<br />
solaris<br />
hpux<br />
</syntaxhighlight><br />
<br />
CMake provides three variables for controlling where binary targets are written. They are the CMAKE_RUNTIME_OUTPUT_DIRECTORY (page 664), CMAKE_LIBRARY_OUTPUT_DIRECTORY (page 660), and CMAKE_ARCHIVE_OUTPUT_DIRECTORY (page 657) variables. These variables are used to initialize properties of libraries and executables to control where they will be written. Setting these enables a project to place all the libraries and executables into a single directory. For projects with many subdirectories, this can be a real time saver. A typical implementation is shown below:<br />
<br />
<syntaxhighlight lang="text"><br />
# Setup output directories.<br />
set (CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/bin)<br />
set (CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/lib)<br />
set (CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${PROJECT_BINARY_DIR}/lib)<br />
</syntaxhighlight><br />
<br />
In this example, all the "runtime" binaries will be written to the bin subdirectory of the project's binary tree, including executable files on all platforms and DLLs on Windows. Other binaries will be written to the lib directory. This approach is very useful for projects that make use of shared libraries (DLLs) because it collects all of the shared libraries into one directory. If the executables are placed in the same directory, then they can find the required shared libraries more easily when run on Windows.<br />
<br />
One final note on directory structures: with CMake, it is perfectly acceptable to have a project within a project. For example, within the Visualization Toolkit's source tree is a directory that contains a complete copy of the zlib compression library. In writing the CMakeLists file for that library, we use the PROJECT command to create a project named VTKZLIB even though it is within the VTK source tree and project. This has no real impact on VTK, but it does allow us to build zlib independent of VTK without having to modify its CMakeLists file.<br />
<br />
<br />
===Useful CMake Commands When Converting Projects===<br />
<br />
There are a few CMake commands that can make the job of converting an existing project easier and faster. The file() (page 287) command with the GLOB argument allows you to quickly set a variable containing a list of all the files that match the glob expression. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
# collect up the source files<br />
file (GLOB SRC_FILES "*.cxx")<br />
<br />
# create the executable<br />
add_executable (foo ${SRC_FILES})<br />
</syntaxhighlight><br />
<br />
will set the SRC_FILES variable to a list of all the .cxx files in the current source directory. It will then create an executable using those source files. Windows developers should be aware that glob matches are case sensitive.<br />
<br />
Two other useful commands are make_directory() (page 349) and exec_program() (page 346). By default, CMake will create all the output directories it needs for the object files, libraries, and executables. With existing projects there may be some part of the build process that creates directories that CMake would not normally create. In these cases, the make_directory command can be used. As soon as CMake executes that command, it will create the directory specified if it does not already exist. The exec_program command will execute a program when it is encountered by CMake. This is useful if you want to quickly convert a UNIX autoconf configured header file to CMake. Instead of doing the full conversion to CMake, you could run configure from an exec_program command to generate the configured header file (on UNIX only, of course).<br />
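A sketch of both commands; the generated directory and the configure invocation are hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
# created immediately when CMake processes this command<br />
make_directory (${PROJECT_BINARY_DIR}/generated)<br />
<br />
# run the existing autoconf configure script (UNIX only)<br />
exec_program (${PROJECT_SOURCE_DIR}/configure<br />
              ${PROJECT_BINARY_DIR})<br />
</syntaxhighlight><br />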
<br />
<br />
===Converting UNIX Makefiles===<br />
<br />
If your project is currently based on standard UNIX Makefiles (not autoconf and Makefile.in or imake) then their conversion to CMake should be fairly straightforward. Essentially, for every directory in your project that has a Makefile, you will create a matching CMakeLists file. How you handle multiple Makefiles in a directory depends on their function. If the additional Makefiles (or Makefile type files) are simply included in the main Makefile, you can create matching CMake syntax files and include them into your main CMakeLists file in a similar manner. If the different Makefiles are meant to be invoked on the command line for different situations, consider creating a main CMakeLists file that uses some logic to choose which one to include() (page 317) based on a CMake option.<br />
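<br />
The option-based approach in that last case can be sketched as follows (the option name and included file names are hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# let the user pick which set of converted rules to use<br />
option (USE_LEGACY_RULES "Include the converted legacy Makefile rules" OFF)<br />
<br />
if (USE_LEGACY_RULES)<br />
  include (${PROJECT_SOURCE_DIR}/legacy_rules.cmake)<br />
else ()<br />
  include (${PROJECT_SOURCE_DIR}/default_rules.cmake)<br />
endif ()<br />
</syntaxhighlight><br />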
<br />
Converting the Makefile syntax to CMake is fairly easy. Frequently Makefiles have a list of object files to compile. These can be converted to CMake variables as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
OBJS= \<br />
foo1.o \<br />
foo2.o \<br />
foo3.o<br />
</syntaxhighlight><br />
<br />
becomes<br />
<br />
<syntaxhighlight lang="text"><br />
set (SOURCES<br />
foo1.c<br />
foo2.c<br />
foo3.c<br />
)<br />
</syntaxhighlight><br />
<br />
While the object files are typically listed in a Makefile, in CMake the focus is on the source files. If you used conditional statements in your Makefiles, they can be converted to CMake if commands. Since CMake handles dependency generation itself, most dependencies, and rules to generate dependencies, can simply be eliminated. Where you have rules to build libraries or executables, replace them with add_library() (page 274) or add_executable() (page 273) commands. Some UNIX build systems (and source code) make heavy use of the system architecture to determine which files to compile or what flags to use; typically this information is stored in a Makefile variable called ARCH or UNAME.<br />
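<br />
For instance, a Makefile conditional keyed on ARCH might be converted along these lines (the file name and the platform test are illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
# Makefile: ifeq ($(ARCH),Linux) OBJS += linux_io.o endif<br />
if (CMAKE_SYSTEM_NAME STREQUAL "Linux")<br />
  set (SOURCES ${SOURCES} linux_io.c)<br />
endif ()<br />
<br />
add_library (foo ${SOURCES})<br />
</syntaxhighlight><br />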
<br />
The first choice in these cases is to replace the architecture-specific code with a more generic test. For example, instead of switching your handling of byte order based on the operating system, make the decision based on the results of a byte order test such as CheckBigEndian.cmake. With some software packages, there is too much architecture-specific code for such a change to be reasonable, or you may want to make decisions based on architecture for other reasons. In those cases, you can use the variables defined in the CMakeDetermineSystem module. They provide fairly detailed information on the operating system and version of the host computer.<br />
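<br />
With stock CMake the byte order test is provided by the TestBigEndian module; a sketch of the generic-test approach:<br />
<br />
<syntaxhighlight lang="text"><br />
# decide byte-order handling from a compile-time test,<br />
# not from the operating system name<br />
include (TestBigEndian)<br />
test_big_endian (WORDS_BIGENDIAN)<br />
if (WORDS_BIGENDIAN)<br />
  add_definitions (-DWORDS_BIGENDIAN)<br />
endif ()<br />
</syntaxhighlight><br />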
<br />
<br />
===Converting Autoconf Based Projects===<br />
<br />
Autoconf-based projects primarily consist of three key pieces. The first is the configure.in file, which drives the process. The second is Makefile.in, which will become the resulting Makefile, and the third is the remaining configured files that result from running configure. In converting an autoconf-based project to CMake, start with the configure.in and Makefile.in files.<br />
<br />
The Makefile.in file can be converted to CMake syntax as explained in the preceding section on converting UNIX Makefiles. Once this has been done, convert the configure.in file into CMake syntax. Most functions (macros) in autoconf have corresponding commands in CMake. A short table of some of the basic conversions is listed below:<br />
<br />
'''AC_ARG_WITH''' Use the option() (page 327) command.<br />
<br />
'''AC_CHECK_HEADER''' Use the CHECK_INCLUDE_FILE macro from the CheckIncludeFile module.<br />
<br />
'''AC_MSG_CHECKING''' Use the message command with the STATUS argument.<br />
<br />
'''AC_SUBST''' Done automatically when using the configure_file() (page 282) command.<br />
<br />
'''AC_CHECK_LIB''' Use the CHECK_LIBRARY_EXISTS macro from the CheckLibraryExists module.<br />
<br />
'''AC_CONFIG_SUBDIRS''' Use the add_subdirectory() (page 277) command.<br />
<br />
'''AC_OUTPUT''' Use the configure_file() (page 282) command.<br />
<br />
'''AC_TRY_COMPILE''' Use the try_compile() (page 343) command.<br />
<br />
If your configure script performs test compilations using AC_TRY_COMPILE , you can use the same code for CMake. Either put it directly into your CMakeLists file if it is short, or preferably put it into a source code file for your project. We typically put such files into a CMake subdirectory for large projects that require such testing.<br />
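<br />
A hedged sketch of such a test (the result variable and the test source file are invented for illustration):<br />
<br />
<syntaxhighlight lang="text"><br />
# check whether the code in CMake/testFeature.c compiles<br />
try_compile (HAS_FEATURE<br />
  ${CMAKE_BINARY_DIR}<br />
  ${PROJECT_SOURCE_DIR}/CMake/testFeature.c<br />
)<br />
if (NOT HAS_FEATURE)<br />
  message (STATUS "optional feature not supported by this compiler")<br />
endif ()<br />
</syntaxhighlight><br />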
<br />
Where you are relying on autoconf to configure files, you can use CMake's configure_file command. The basic approach is the same, and we typically name input files to be configured with a .in extension, just as autoconf does. This command replaces any variables in the input file referenced as ${VAR} or @VAR@ with their values as determined by CMake. If a variable is not defined, it will be replaced with nothing. Optionally, only variables of the form @VAR@ will be replaced and ${VAR} will be ignored; this is useful when configuring files for languages that use ${VAR} as their own variable syntax. You can also conditionally define C preprocessor variables by using #cmakedefine VAR. If the variable is defined, configure_file will convert the #cmakedefine into a #define; if it is not defined, it will become a commented-out #undef. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
/* what byte order is this system */<br />
#cmakedefine CMAKE_WORDS_BIGENDIAN<br />
<br />
/* what size is an INT */<br />
#cmakedefine SIZEOF_INT @SIZEOF_INT@<br />
</syntaxhighlight><br />
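<br />
The CMakeLists side that consumes such a template might look like this (the config.h.in name is an assumption):<br />
<br />
<syntaxhighlight lang="text"><br />
# substitute @VAR@ and #cmakedefine lines in config.h.in and<br />
# write the result into the build tree<br />
configure_file (<br />
  ${PROJECT_SOURCE_DIR}/config.h.in<br />
  ${PROJECT_BINARY_DIR}/config.h<br />
)<br />
<br />
# make the configured header visible to the compiler<br />
include_directories (${PROJECT_BINARY_DIR})<br />
</syntaxhighlight><br />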
<br />
===Converting Windows Based Workspaces===<br />
<br />
To convert a Visual Studio workspace (or solution for Visual Studio .Net) to CMake involves a few steps. First you will need to create a CMakeLists file at the top of your source code directory. This file should start with a project() (page 327) command that defines the name of the project. This will become the name of the resulting workspace (or solution for Visual Studio .Net). Next, add all of your source files into CMake variables. For large projects that have multiple directories, create a CMakeLists file in each directory as described in the section on source directory structures at the beginning of this chapter. You will then add your libraries and executables using add_library() (page 274) and add_executable() (page 273). By default, add_executable assumes that your executable is a console application. Adding the WIN32 argument to add_executable indicates that it is a Windows application (using WinMain instead of main).<br />
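<br />
A minimal sketch of these steps (the project and file names are invented):<br />
<br />
<syntaxhighlight lang="text"><br />
project (MyApp)<br />
<br />
set (SRCS main.cxx mainwindow.cxx)<br />
<br />
# WIN32 makes this a Windows (WinMain) application rather<br />
# than a console application<br />
add_executable (MyApp WIN32 ${SRCS})<br />
</syntaxhighlight><br />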
<br />
There are a few nice features that Visual Studio supports and CMake can take advantage of. One is support for class browsing. Typically in CMake, only source files are added to a target, not header files. If you add header files to a target, they will show up in the workspace and then you will be able to browse them as usual. Visual Studio also supports the notion of groups of files. By default, CMake creates groups for source files and header files. Using the source_group() (page 334) command, you can create your own groups and assign files to them. If you have any custom build steps in your workspace, these can be added to your CMakeLists files using the add_custom_command() (page 269) command. Custom targets (Utility Targets) in Visual Studio can be added with the add_custom_target() (page 272) command.<br />
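<br />
For example, headers can be listed in the target for browsing and assigned to a custom group (the names are hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# listing the headers in the target makes them browsable in the IDE<br />
add_library (foo foo.cxx foo.h fooInternal.h)<br />
<br />
# place the internal header in its own group in the workspace<br />
source_group ("Internal Headers" FILES fooInternal.h)<br />
</syntaxhighlight><br />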
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=MastringCmakeVersion31:Chapter_06&diff=5604MastringCmakeVersion31:Chapter 062020-09-21T12:01:41Z<p>Onionmixer: CMAKE Chapter 6</p>
<hr />
<div>==CHAPTER SIX::CUSTOM COMMANDS AND TARGETS==<br />
<br />
Frequently the build process for a software project goes beyond simply compiling libraries and executables. In many cases, additional tasks may be required during or after the build process. Common examples include: compiling documentation using a documentation package; generating source files by running another executable; generating files using tools for which CMake doesn't have rules (such as lex and yacc); moving the resulting executables; post processing the executable, etc. CMake supports these additional tasks using both custom commands and targets. This chapter will describe how to use custom commands and targets to perform complex tasks that CMake does not inherently support.<br />
<br />
<br />
===Portable Custom Commands===<br />
<br />
Before going into detail on how to use custom commands, we will discuss how to deal with some of their portability issues. Custom commands typically involve running programs with files as inputs or outputs. Even a simple command, such as copying a file, can be tricky to do in a cross-platform way. For example, copying a file on UNIX is done with the cp command, while on Windows it is done with the copy command. To make matters worse, frequently the names of files will change on different platforms: executables on Windows end with .exe, while on UNIX they do not. Even between UNIX implementations there are differences, such as which extensions are used for shared libraries: .so, .sl, .dylib, etc.<br />
<br />
CMake provides three main tools for handling these differences. The first is the -E option (short for execute) to cmake. When the cmake executable is passed the -E option, it acts as a general purpose, cross-platform utility command. The arguments following the -E option indicate what cmake should do. Some options include:<br />
<br />
'''chdir dir command args''' Changes the current directory to dir and then executes the command with the provided arguments.<br />
<br />
'''copy file destination''' Copies a file from one directory or filename to another.<br />
<br />
'''copy_if_different in-file out-file''' First checks to see if the files are different before copying them. This is critical in many rules, since the build process is based on file modification times; if the copied file is used as the input to another build rule, then copy_if_different can eliminate unnecessary recompilations.<br />
<br />
'''copy_directory source destination''' This option copies the source directory including any subdirectories to the destination directory.<br />
<br />
'''remove file1 file2...''' Removes the listed files from the disk.<br />
<br />
'''echo string''' Echoes a string to the console. This is useful for providing output during the build process.<br />
<br />
'''time command args''' Runs the command and times its execution.<br />
<br />
These options provide a platform-independent way to perform a few common tasks. The cmake executable can be referenced by using the CMAKE_COMMAND (page 625) variable in your CMakeLists files, as later examples will show.<br />
<br />
Of course, CMake doesn't limit you to using cmake -E in all your commands. You can use any command that you like, though it's important to consider portability issues when doing it. A common practice is to use find_program() (page 306) to find an executable (Perl, for example), and then use that executable in your custom commands.<br />
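<br />
A sketch of that practice (the post-processing script is invented, and Foo is assumed to be a target defined elsewhere):<br />
<br />
<syntaxhighlight lang="text"><br />
# locate perl once; the cached result can be used in custom commands<br />
find_program (PERL_EXECUTABLE NAMES perl)<br />
<br />
add_custom_command (<br />
  TARGET Foo<br />
  POST_BUILD<br />
  COMMAND ${PERL_EXECUTABLE} ${PROJECT_SOURCE_DIR}/postprocess.pl<br />
)<br />
</syntaxhighlight><br />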
<br />
The second tool that CMake provides to address portability issues is a number of variables describing the characteristics of the platform. The cmake-variables(7) (page 623) manual lists many variables that are useful for custom commands that need to reference files with platform-dependent names. These include CMAKE_EXECUTABLE_SUFFIX (page 627), CMAKE_SHARED_LIBRARY_PREFIX (page 632), etc., which describe file naming conventions.<br />
<br />
Finally, CMake 2.8.4 and later support "generator expressions" in custom commands. These are expressions that use the special syntax $<...>; they are evaluated by the generator of the native build files. Please see the cmake-generator-expressions(7) (page 356) manual for further details. They may appear anywhere in a custom command line. Supported expressions include:<br />
<br />
$<CONFIGURATION> Build configuration name, such as "Debug" or "Release".<br />
<br />
$<TARGET_FILE:tgt> The main file on disk associated with the named target "tgt" (.exe, .so.1.2, .a).<br />
<br />
Generator expressions are not evaluated while processing CMake input files, but are instead delayed until generation of the final build system. Therefore, the values substituted for them know all the details of their evaluation context, including the current build configuration and all build properties associated with a target.<br />
<br />
<br />
===Using add_custom_command on a Target===<br />
<br />
Now we will consider the signature for add_custom_command. In Makefile terminology, add_custom_command adds a rule to a Makefile. For those more familiar with Visual Studio, it adds a custom build step to a file. add_custom_command has two main signatures: one for adding a custom command to a target and one for adding a custom command to build a file. When adding a custom command to a target the signature is as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
add_custom_command (<br />
TARGET target<br />
PRE_BUILD | PRE_LINK | POST_BUILD<br />
COMMAND command [ARGS arg1 arg2 arg3 ...]<br />
[COMMAND command [ARGS arg1 arg2 arg3 ...] ...]<br />
[COMMENT comment]<br />
)<br />
</syntaxhighlight><br />
<br />
The target is the name of a CMake target (executable, library, or custom) to which you want to add the custom command. There is a choice of when the custom command should be executed. PRE_BUILD indicates that the command should be executed before any other dependencies for the target are built. PRE_LINK indicates that the command should be executed after all the dependencies are built, but before the actual link command. POST_BUILD indicates that the custom command should be executed after the target has been built. The COMMAND argument is the command (executable) to run, and ARGS provides an optional list of arguments to the command. Finally, the COMMENT argument can be used to provide a quoted string to be used as output when this custom command is run. This is useful if you want to provide some feedback or documentation on what is happening during the build. You can specify as many commands as you want for a custom command. They will be executed in the order specified.<br />
<br />
<br />
====How to Copy an Executable Once it is Built?====<br />
<br />
Now let us consider a simple custom command for copying an executable once it has been built.<br />
<br />
<syntaxhighlight lang="text"><br />
# first define the executable target as usual<br />
add_executable (Foo bar.c)<br />
<br />
# then add the custom command to copy it<br />
add_custom_command (<br />
TARGET Foo<br />
POST_BUILD<br />
COMMAND ${CMAKE_COMMAND}<br />
ARGS -E copy $<TARGET_FILE:Foo> /testing_department/files<br />
)<br />
</syntaxhighlight><br />
<br />
The first command in this example is the standard command for creating an executable from a list of source files. In this case, an executable named Foo is created from the source file bar.c. Next is the add_custom_command invocation. Here the target is simply Foo and we are adding a post-build command. The command to execute is cmake, whose full path is stored in the CMAKE_COMMAND variable. Its arguments are -E copy followed by the source and destination locations; in this case, it will copy the Foo executable from where it was built into the /testing_department/files directory. Note that the TARGET parameter accepts a CMake target (Foo in this example), but arguments specified to the COMMAND parameter normally require full paths; here we pass cmake -E copy the full path to the executable, referenced via the $<TARGET_FILE:...> generator expression.<br />
<br />
<br />
===Using add_custom_command to Generate a File===<br />
<br />
The second use for add_custom_command() (page 269) is to add a rule for how to build an output file. Here the rule provided will replace any current rules for building that file. The signature is as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
add_custom_command (OUTPUT output1 [output2 ...]<br />
COMMAND command [ARGS [args...]]<br />
[COMMAND command [ARGS arg1 arg2 arg3 ...] ...]<br />
[MAIN_DEPENDENCY depend]<br />
[DEPENDS [depends...]]<br />
[COMMENT comment ]<br />
)<br />
</syntaxhighlight><br />
<br />
The OUTPUT is the file (or files) that will result from running this custom command. The COMMAND and ARGS parameters are the command to execute and the arguments to pass to it. As with the prior signature, you can have as many commands as you wish. The DEPENDS are files or executables on which this custom command depends; if any of these dependencies change, the custom command will re-execute. The MAIN_DEPENDENCY is an optional argument that acts as a regular dependency and, under Visual Studio, provides a suggestion for which file to hang this custom command on. If the MAIN_DEPENDENCY is not specified, CMake will create one automatically. The MAIN_DEPENDENCY should not be a regular .c or .cxx file, since the custom command will override the default build rule for that file. Finally, the optional COMMENT is a comment that may be used by some generators to provide additional information during the build process.<br />
<br />
<br />
====Using an Executable to Build a Source File====<br />
<br />
Sometimes a software project builds an executable that is then used for generating source files, which are in turn used to build other executables or libraries. This may sound like an odd case, but it occurs quite frequently. One example is the build process for the TIFF library, which creates an executable that is then run to generate a source file containing system-specific information. This file is then used as a source file when building the main TIFF library. Another example is the Visualization Toolkit (VTK), which builds an executable called vtkWrapTcl that wraps C++ classes into Tcl. The executable is built and then used to create more source files for the build process.<br />
<br />
<syntaxhighlight lang="text"><br />
###################################################<br />
# Test using a compiled program to create a file<br />
###################################################<br />
<br />
# add the executable that will create the file<br />
# build creator executable from creator.cxx<br />
add_executable (creator creator.cxx)<br />
<br />
# add the custom command to produce created.c<br />
add_custom_command (<br />
OUTPUT ${PROJECT_BINARY_DIR}/created.c<br />
DEPENDS creator<br />
COMMAND creator ${PROJECT_BINARY_DIR}/created.c<br />
)<br />
<br />
# add an executable that uses created.c<br />
add_executable (Foo ${PROJECT_BINARY_DIR}/created.c)<br />
</syntaxhighlight><br />
<br />
The first part of this example produces the creator executable from the source file creator.cxx. The custom command then sets up a rule for producing the source file created.c by running that executable. The custom command depends on the creator target and writes its result into the output tree (PROJECT_BINARY_DIR). Finally, an executable target called Foo is added, built from the created.c source file. CMake will create all the required rules in the Makefile (or Visual Studio workspace) so that when you build the project, the creator executable is built and run to create created.c, which is then used to build the Foo executable.<br />
<br />
<br />
===Adding a Custom Target===<br />
<br />
In the discussion so far, CMake targets have generally referred to executables and libraries. CMake supports a more general notion of targets, called custom targets, which can be used whenever you want the notion of a target but without the end product being a library or an executable. Examples of custom targets include targets to build documentation, run tests, or update web pages. To add a custom target, use the ADD_CUSTOM_TARGET command with the following signature:<br />
<br />
<syntaxhighlight lang="text"><br />
ADD_CUSTOM_TARGET ( name [ALL]<br />
[command arg arg arg... ]<br />
[DEPENDS depend depend depend ... ]<br />
)<br />
</syntaxhighlight><br />
<br />
The name specified will be the name given to the target. You can use that name to build the target specifically with Makefiles (make name) or Visual Studio (right-click on the target and then select Build). If the optional ALL argument is specified, this target will be included in the ALL_BUILD target and will automatically be built whenever the Makefile or Project is built. The command and arguments are optional; if specified, they will be added to the target as a post-build command. For custom targets that only execute a command, this is all you will need. More complex custom targets may depend on other files; in those cases, the DEPENDS arguments are used to list which files the target depends on. We will consider examples of both cases. First, let us look at a custom target that has no dependencies:<br />
<br />
<syntaxhighlight lang="text"><br />
ADD_CUSTOM_TARGET ( FooJAR ALL<br />
${JAR} -cvf "\"${PROJECT_BINARY_DIR}/Foo.jar\""<br />
"\"${PROJECT_SOURCE_DIR}/Java\""<br />
)<br />
</syntaxhighlight><br />
<br />
In this example, a custom target named FooJAR is always built; it invokes the jar tool to archive the project's Java sources into Foo.jar. The second case to consider is a custom target that depends on files generated by custom commands, as in the following example, which uses LaTeX to produce a .pdf file:<br />
<br />
<syntaxhighlight lang="text"><br />
# Add the rule to build the .dvi file from the .tex<br />
# file. This relies on LATEX being set correctly<br />
#<br />
add_custom_command (<br />
OUTPUT ${PROJECT_BINARY_DIR}/doc1.dvi<br />
DEPENDS ${PROJECT_SOURCE_DIR}/doc1.tex<br />
COMMAND ${LATEX} ${PROJECT_SOURCE_DIR}/doc1.tex<br />
)<br />
<br />
# Add the rule to produce the .pdf file from the .dvi<br />
# file. This relies on DVIPDF being set correctly<br />
#<br />
add_custom_command (<br />
OUTPUT ${PROJECT_BINARY_DIR}/doc1.pdf<br />
DEPENDS ${PROJECT_BINARY_DIR}/doc1.dvi<br />
COMMAND ${DVIPDF} ${PROJECT_BINARY_DIR}/doc1.dvi<br />
)<br />
<br />
# finally add the custom target that when invoked<br />
# will cause the generation of the pdf file<br />
#<br />
ADD_CUSTOM_TARGET ( TDocument ALL<br />
DEPENDS ${PROJECT_BINARY_DIR}/doc1.pdf<br />
)<br />
</syntaxhighlight><br />
<br />
This example makes use of both add_custom_command and ADD_CUSTOM_TARGET. The two add_custom_command invocations specify the rules for producing a .pdf file from a .tex file. In this case, there are two steps and two custom commands: first a .dvi file is produced from the .tex file by running LaTeX, then the .dvi file is processed to produce the desired .pdf file. Finally, a custom target called TDocument is added. It does no work itself; the real work is done by the two custom commands. The DEPENDS argument sets up a dependency between the custom target and the custom commands. When TDocument is built, it will first check whether all of its dependencies are built; if any are not, it will invoke the appropriate custom commands to build them. This example can be shortened by combining the two custom commands into one, as shown in the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
# Add the rule to build the .pdf file from the .tex<br />
# file. This relies on LATEX and DVIPDF being set correctly<br />
#<br />
add_custom_command (<br />
OUTPUT ${PROJECT_BINARY_DIR}/doc1.pdf<br />
DEPENDS ${PROJECT_SOURCE_DIR}/doc1.tex<br />
COMMAND ${LATEX} ${PROJECT_SOURCE_DIR}/doc1.tex<br />
COMMAND ${DVIPDF} ${PROJECT_BINARY_DIR}/doc1.dvi<br />
)<br />
<br />
# finally add the custom target that when invoked<br />
# will cause the generation of the pdf file<br />
#<br />
ADD_CUSTOM_TARGET ( TDocument ALL<br />
DEPENDS ${PROJECT_BINARY_DIR}/doc1.pdf<br />
)<br />
</syntaxhighlight><br />
<br />
Now consider a case where the documentation consists of multiple files. The above example can be modified to handle many files by using a list of inputs and a foreach loop. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
# set the list of documents to process<br />
set (DOCS doc1 doc2 doc3)<br />
<br />
# add the custom commands for each document<br />
foreach (DOC ${DOCS})<br />
<br />
add_custom_command (<br />
OUTPUT ${PROJECT_BINARY_DIR}/${DOC}.pdf<br />
DEPENDS ${PROJECT_SOURCE_DIR}/${DOC}.tex<br />
COMMAND ${LATEX} ${PROJECT_SOURCE_DIR}/${DOC}.tex<br />
COMMAND ${DVIPDF} ${PROJECT_BINARY_DIR}/${DOC}.dvi<br />
)<br />
<br />
# build a list of all the results<br />
list (APPEND DOC_RESULTS ${PROJECT_BINARY_DIR}/${DOC}.pdf)<br />
<br />
endforeach (DOC)<br />
<br />
# finally add the custom target that when invoked<br />
# will cause the generation of the pdf file<br />
#<br />
ADD_CUSTOM_TARGET ( TDocument ALL<br />
DEPENDS ${DOC_RESULTS}<br />
)<br />
</syntaxhighlight><br />
<br />
In this example, building the custom target TDocument will cause all of the specified .pdf files to be generated. Adding a new document to the list is simply a matter of adding its filename to the DOCS variable at the top of the example.<br />
<br />
<br />
===Specifying Dependencies and Outputs===<br />
<br />
When using custom commands and custom targets, you will often be specifying dependencies. When specifying a dependency or the output of a custom command, you should always give the full path. For example, if the command produces foo.h in the binary tree, its output should be something like ${PROJECT_BINARY_DIR}/foo.h. CMake will try to determine the correct path for the file if one is not specified, but complex projects frequently end up using files in both the source and build trees, and this eventually leads to errors if the full paths are not specified.<br />
<br />
When specifying a target as a dependency, you can leave off the full path and executable extension, referencing it simply by its name. Consider the specification of the creator target as an add_custom_command dependency in the example earlier in this chapter; CMake recognizes creator as matching an existing target and properly handles the dependency.<br />
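<br />
Putting the two rules together (the file names are invented; creator is the target from the earlier example):<br />
<br />
<syntaxhighlight lang="text"><br />
add_custom_command (<br />
  OUTPUT ${PROJECT_BINARY_DIR}/foo.h         # output: full path<br />
  DEPENDS ${PROJECT_SOURCE_DIR}/foo.h.in     # file dependency: full path<br />
          creator                            # target dependency: name only<br />
  COMMAND creator ${PROJECT_SOURCE_DIR}/foo.h.in ${PROJECT_BINARY_DIR}/foo.h<br />
)<br />
</syntaxhighlight><br />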
<br />
<br />
===When There isn't One Rule For One Output===<br />
<br />
There are a couple of unusual cases that can arise when using custom commands that warrant further explanation. The first is a case where one command (or executable) can create multiple outputs, and the second is when multiple commands can be used to create a single output.<br />
<br />
====A Single Command Producing Multiple Outputs====<br />
<br />
In CMake, a custom command can produce multiple outputs simply by listing multiple outputs after the OUTPUT keyword. CMake will create the correct rules for your build system so that no matter which output is required for a target, the right rules will be run. If the executable happens to produce a few outputs but the build process is only using one of them, then you can simply ignore the other outputs when creating your custom command. Say that the executable produces a source file that is used in the build process, and also an execution log that is not used. The custom command should specify the source file as the output and ignore the fact that a log file is also generated.<br />
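<br />
For example, a yacc-style generator that emits both a source file and a header can declare both outputs (the ${YACC} variable and the bison-style flags are assumptions):<br />
<br />
<syntaxhighlight lang="text"><br />
add_custom_command (<br />
  OUTPUT ${PROJECT_BINARY_DIR}/parser.c ${PROJECT_BINARY_DIR}/parser.h<br />
  DEPENDS ${PROJECT_SOURCE_DIR}/parser.y<br />
  COMMAND ${YACC} -d -o ${PROJECT_BINARY_DIR}/parser.c<br />
          ${PROJECT_SOURCE_DIR}/parser.y<br />
)<br />
</syntaxhighlight><br />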
<br />
Another case of having one command with multiple outputs is when the command is the same but its arguments change. This is effectively the same as having a different command, and each case should have its own custom command. An example of this was the documentation example on page 112, where a custom command was added for each .tex file; the command is the same, but the arguments passed to it change each time.<br />
<br />
====Having One Output That Can Be Generated By Different Commands====<br />
<br />
In rare cases, you may find that you have more than one command that can be used to generate an output. Most build systems, such as make and Visual Studio, do not support this, and likewise CMake does not. There are two common approaches to resolving this. If you truly have two different commands that produce the same output and no other significant outputs, simply pick one of them and create a custom command for it.<br />
<br />
In more complex cases there are multiple commands with multiple outputs; for example:<br />
<br />
<syntaxhighlight lang="text"><br />
Command1 produces foo.h and bar.h<br />
Command2 produces widget.h and bar.h<br />
</syntaxhighlight><br />
<br />
There are a few approaches that can be used in this case. You might consider combining both commands and all three outputs into a single custom command, so that whenever one output is required, all three are built at the same time. You could also create three custom commands, one for each unique output. The custom command for foo.h would invoke Command1, while the one for widget.h would invoke Command2. When specifying the custom command for bar.h, you could choose either Command1 or Command2.<br />
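<br />
The combined-command approach can be sketched as follows (Command1 and Command2 stand for whatever programs actually produce the files):<br />
<br />
<syntaxhighlight lang="text"><br />
# a single rule produces all three headers, so requesting any one<br />
# of them runs both commands<br />
add_custom_command (<br />
  OUTPUT ${PROJECT_BINARY_DIR}/foo.h<br />
         ${PROJECT_BINARY_DIR}/bar.h<br />
         ${PROJECT_BINARY_DIR}/widget.h<br />
  COMMAND Command1<br />
  COMMAND Command2<br />
)<br />
</syntaxhighlight><br />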
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=MastringCmakeVersion31:Chapter_05&diff=5603MastringCmakeVersion31:Chapter 052020-09-21T12:01:03Z<p>Onionmixer: CMAKE Chapter 5</p>
<hr />
<div>==CHAPTER FIVE::SYSTEM INSPECTION==<br />
<br />
This chapter will describe how to use CMake to inspect the environment of the system where the software is being built. This is a critical factor in creating cross-platform applications or libraries. It covers how to find and use system and user installed header files and libraries. It also covers some of the more advanced features of CMake, including the try_compile() (page 343) and try_run() (page 344) commands. These commands are extremely powerful tools for determining the capabilities of the system and compiler that is hosting your software.<br />
<br />
This chapter also describes how to generate configured files and how to cross compile with CMake. Finally, the steps required to enable a project for the find_package() (page 297) command are covered, explaining how to create a <Package>Config.cmake file and other required files.<br />
<br />
===Using Header Files and Libraries===<br />
<br />
Many C and C++ programs depend on external libraries; however, when it comes to the practical aspects of compiling and linking a project, taking advantage of existing libraries can be difficult for both developers and users. Problems typically show up as soon as the software is built on a system other than the one on which it was developed. Assumptions regarding where libraries and header files are located become obvious when they are not installed in the same place on the new computer and the build system is unable to find them. CMake has many features to aid developers in the integration of external software libraries into a project.<br />
<br />
The CMake commands that are most relevant to this type of integration are the find_file() (page 292), find_library() (page 294), find_path() (page 303), find_program() (page 306), and find_package() (page 297) commands. For most C and C++ libraries, a combination of find_library and find_path will be enough to compile and link with an installed library. The command find_library can be used to locate, or to allow a user to locate, a library, and find_path can be used to find the path to a representative include file from the project. For example, if you wanted to link to the tiff library, you could use the following commands in your CMakeLists.txt file:<br />
<br />
<syntaxhighlight lang="text"><br />
# find libtiff, looking in some standard places<br />
find_library (TIFF_LIBRARY<br />
NAMES tiff tiff2<br />
PATHS /usr/local/lib /usr/lib<br />
)<br />
<br />
# find tiff.h looking in some standard places<br />
find_path (TIFF_INCLUDES tiff.h<br />
/usr/local/include<br />
/usr/include<br />
)<br />
<br />
include_directories (${TIFF_INCLUDES})<br />
<br />
add_executable (mytiff mytiff.c)<br />
<br />
target_link_libraries (mytiff ${TIFF_LIBRARY})<br />
</syntaxhighlight><br />
<br />
The first command used is find_library, which in this case will look for a library with the name tiff or tiff2. The find_library command only requires the base name of the library, without any platform-specific prefixes or suffixes such as lib and .dll. The appropriate prefixes and suffixes for the system running CMake will be added to the library name automatically when CMake attempts to find it. All the find_* commands will look in the PATH environment variable. In addition, the commands allow the specification of additional search paths as arguments listed after the PATHS marker argument. In addition to supporting standard paths, Windows registry entries and environment variables can be used to construct search paths. The syntax for registry entries is the following:<br />
<br />
<syntaxhighlight lang="text"><br />
[HKEY_CURRENT_USER\\Software\\Kitware\\Path;Build1]<br />
</syntaxhighlight><br />
<br />
Since software can be installed in many different places, it is impossible for CMake to find the library every time, but most standard installations should be covered. The find_* commands automatically create a cache variable so that users can override or specify the location from the CMake GUI. This way, if CMake is unable to locate the files it is looking for, users will still have an opportunity to specify them. If CMake does not find a file, the value is set to VAR-NOTFOUND; this value tells CMake that it should continue looking each time CMake's configure step is run. Note that in if statements, values of VAR-NOTFOUND will evaluate as false.<br />
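The NOTFOUND behavior can be made explicit in a CMakeLists file. The following sketch (the error message and the decision to stop with FATAL_ERROR are illustrative, not part of the original example) rejects a failed search for the tiff library:<br />
<br />
<syntaxhighlight lang="text"><br />
# TIFF_LIBRARY is TIFF_LIBRARY-NOTFOUND if the search failed;<br />
# NOTFOUND values evaluate as false in if statements<br />
if (NOT TIFF_LIBRARY)<br />
message (FATAL_ERROR<br />
"tiff library not found; set TIFF_LIBRARY in the CMake GUI")<br />
endif (NOT TIFF_LIBRARY)<br />
</syntaxhighlight><br />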
<br />
The next command used is find_path, a general purpose command that, in this example, is used to locate a header file from the library. Header files and libraries are often installed in different locations, and both locations are required to compile and link programs that use them. The use of find_path is similar to find_library, although it supports only one name, followed by the list of search paths.<br />
<br />
The next part of the CMakeLists file uses the variables created by the find_* commands. The variables can be used without checking for valid values, as CMake will print an error message notifying the user if any of the required variables have not been set. The user can then set the cache values and reconfigure until the message goes away. Optionally, a CMakeLists file could use the if command to use alternative libraries or options to build the project without the library if it cannot be found.<br />
<br />
From the above example you can see how using the find_* commands can help your software compile on a variety of systems. It is worth noting that the find_* commands search for a match starting with the first argument and first path, so when listing paths and library names, list your preferred paths and names first. If there are multiple versions of a library and you would prefer tiff over tiff2, make sure they are listed in that order.<br />
<br />
<br />
===System Properties===<br />
<br />
Although it is a common practice in C and C++ code to add platform-specific code inside preprocessor ifdef directives, for maximum portability this should be avoided. Software should not be tuned to specific platforms with ifdefs, but rather to a canonical system consisting of a set of features. Coding to specific systems makes the software less portable, because systems and the features they support change with time, and even from system to system. A feature that may not have worked on a platform in the past may be a required feature for the platform in the future. The following code fragments illustrate the difference between coding to a canonical system and a specific system:<br />
<br />
<syntaxhighlight lang="text"><br />
// coding to a feature<br />
#ifdef HAS_FOOBAR_CALL<br />
foobar ();<br />
#else<br />
myfoobar ();<br />
#endif<br />
<br />
// coding to specific platforms<br />
#if defined(SUN) && defined(HPUX) && !defined(GNUC)<br />
foobar ();<br />
#else<br />
myfoobar ();<br />
#endif<br />
</syntaxhighlight><br />
<br />
The problem with the second approach is that the code will have to be modified for each new platform on which the software is compiled. For example, a future version of SUN may no longer have the foobar call. Using the HAS_FOOBAR_CALL approach, the software will work as long as HAS_FOOBAR_CALL is defined correctly, and this is where CMake can help. CMake can be used to define HAS_FOOBAR_CALL correctly and automatically by making use of the try_compile() (page 343) and try_run() (page 344) commands. These commands can be used to compile and run small test programs during the CMake configure step. The test programs will be sent to the compiler that will be used to build the project, and if errors occur, the feature can be disabled. These commands require that you write a small C or C++ program to test the feature. For example, to test if the foobar call is provided on the system, try compiling a simple program that uses foobar. First write the simple test program (testNeedFoobar.c in this example) and then add the CMake calls to the CMakeLists file to try compiling that code. If the compilation works then HAS_FOOBAR_CALL will be set to true.<br />
<br />
<syntaxhighlight lang="text"><br />
----testNeedFoobar.c----<br />
#include <foobar.h><br />
main ()<br />
{<br />
foobar ();<br />
}<br />
<br />
----testNeedFoobar.cmake----<br />
try_compile (HAS_FOOBAR_CALL<br />
${CMAKE_BINARY_DIR}<br />
${PROJECT_SOURCE_DIR}/testNeedFoobar.c<br />
)<br />
</syntaxhighlight><br />
<br />
Now that HAS_FOOBAR_CALL is set correctly in CMake, you can use it in your source code through either the add_definitions() (page 273) command or by configuring a header file. We recommend configuring a header file as that file can be used by other projects that depend on your library. This is discussed further in the section called 'How To Configure a Header File'.<br />
<br />
Sometimes compiling a test program is not enough. In some cases, you may actually want to compile and run a program to get its output. A good example of this is testing the byte order of a machine. The following example shows how to write a small program that CMake will compile and run to determine the byte order of a machine.<br />
<br />
<syntaxhighlight lang="text"><br />
----TestByteOrder.c----<br />
#include <stdlib.h><br />
int main () {<br />
/* Are we most significant byte first or last */<br />
union<br />
{<br />
long l;<br />
char c[sizeof (long)];<br />
} u;<br />
u.l = 1;<br />
exit (u.c[sizeof (long) - 1] == 1);<br />
}<br />
<br />
<br />
----TestByteOrder.cmake----<br />
try_run (RUN_RESULT_VAR<br />
COMPILE_RESULT_VAR<br />
${CMAKE_BINARY_DIR}<br />
${PROJECT_SOURCE_DIR}/Modules/TestByteOrder.c<br />
OUTPUT_VARIABLE OUTPUT<br />
)<br />
</syntaxhighlight><br />
<br />
The return result of the run will go into RUN_RESULT_VAR, the result of the compile will go into COMPILE_RESULT_VAR, and any output from the run will go into OUTPUT. You can use these variables to report debug information to the users of your project.<br />
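As a sketch of how these variables might be used (the messages and the result variable name are illustrative, not from the original example), the outcome of the byte-order test could be reported and recorded:<br />
<br />
<syntaxhighlight lang="text"><br />
if (NOT COMPILE_RESULT_VAR)<br />
# the test program did not compile; show the captured output<br />
message (STATUS "byte order test failed to compile: ${OUTPUT}")<br />
elseif (RUN_RESULT_VAR)<br />
# TestByteOrder.c exits with 1 on big endian machines<br />
set (WORDS_BIGENDIAN 1)<br />
else (RUN_RESULT_VAR)<br />
set (WORDS_BIGENDIAN 0)<br />
endif (NOT COMPILE_RESULT_VAR)<br />
</syntaxhighlight><br />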
<br />
For small test programs the file() (page 287) command with the WRITE option can be used to create the source file from the CMakeLists file. The following example tests the C compiler to verify that it can be run.<br />
<br />
<syntaxhighlight lang="text"><br />
file (WRITE<br />
${CMAKE_BINARY_DIR}/CMakeTmp/testCCompiler.c<br />
"int main() {return 0;}"<br />
)<br />
<br />
try_compile (CMAKE_C_COMPILER_WORKS<br />
${CMAKE_BINARY_DIR}<br />
${CMAKE_BINARY_DIR}/CMakeTmp/testCCompiler.c<br />
OUTPUT_VARIABLE OUTPUT<br />
)<br />
</syntaxhighlight><br />
<br />
There are several predefined try-run and try-compile macros in the CMake/Modules directory, some of which are listed below. These macros allow some common checks to be performed without having to create a source file for each test. For detailed documentation, or to see how these macros work, look at their implementation files in the CMake/Modules directory of your installation. Many of these macros will look at the current value of the CMAKE_REQUIRED_FLAGS and CMAKE_REQUIRED_LIBRARIES variables to add additional compile flags or link libraries to the test.<br />
<br />
<br />
'''CheckFunctionExists.cmake''' This macro checks to see if a C function is on a system by taking two arguments with the first being the name of the function to check for and the second being the variable to store the result into. This macro uses CMAKE_REQUIRED_FLAGS and CMAKE_REQUIRED_LIBRARIES if they are set.<br />
<br />
'''CheckIncludeFile.cmake''' This macro checks for an include file on a system by taking two arguments, with the first being the include file to look for and the second being the variable to store the result into. Additional CFlags can be passed in as a third argument or by setting CMAKE_REQUIRED_FLAGS.<br />
<br />
'''CheckIncludeFileCXX.cmake''' This macro checks for an include file in a C++ program by taking two arguments, with the first being the include file to look for and the second being the variable to store the result into. Additional CFlags can be passed in as a third argument.<br />
<br />
'''CheckIncludeFiles.cmake''' This macro checks for a group of include files by taking two arguments, with the first being the include files to look for and the second being the variable to store the result into. This macro uses CMAKE_REQUIRED_FLAGS if it is set, and is useful when a header file you are interested in checking for depends on including another header file first.<br />
<br />
'''CheckLibraryExists.cmake''' This macro checks to see if a library exists by taking four arguments with the first being the name of the library to check for; the second being the name of a function that should be in that library; the third argument being the location of where the library should be found; and the fourth argument being a variable to store the result into. This macro uses CMAKE_REQUIRED_FLAGS and CMAKE_REQUIRED_LIBRARIES if they are set.<br />
<br />
'''CheckSymbolExists.cmake''' This macro checks to see if a symbol is defined in a header file by taking three arguments with the first being the symbol to look for; the second argument being a list of header files to try including; and the third argument being where the result is stored. This macro uses CMAKE_REQUIRED_FLAGS and CMAKE_REQUIRED_LIBRARIES if they are set.<br />
<br />
'''CheckTypeSize.cmake''' This macro determines the size in bytes of a variable type by taking two arguments with the first argument being the type to evaluate, and the second argument being where the result is stored. Both CMAKE_REQUIRED_FLAGS and CMAKE_REQUIRED_LIBRARIES are used if they are set.<br />
<br />
'''CheckVariableExists.cmake''' This macro checks to see if a global variable exists by taking two arguments with the first being the variable to look for, and the second argument being the variable to store the result in. This macro will prototype the named variable and then try to use it. If the test program compiles then the variable exists. This will only work for C variables. This macro uses CMAKE_REQUIRED_FLAGS and CMAKE_REQUIRED_LIBRARIES if they are set.<br />
<br />
Consider the following example which shows a variety of these modules being used to compute properties of the platform. At the beginning of the example four modules are loaded from CMake. The remainder of the example uses the macros defined in those modules to test for header files, libraries, symbols, and type sizes respectively.<br />
<br />
<syntaxhighlight lang="text"><br />
# Include all the necessary files for macros<br />
include (CheckIncludeFiles)<br />
include (CheckLibraryExists)<br />
include (CheckSymbolExists)<br />
include (CheckTypeSize)<br />
<br />
# Check for header files<br />
set (INCLUDES "")<br />
CHECK_INCLUDE_FILES ("${INCLUDES};winsock.h" HAVE_WINSOCK_H)<br />
<br />
if (HAVE_WINSOCK_H)<br />
set (INCLUDES ${INCLUDES} winsock.h)<br />
endif (HAVE_WINSOCK_H)<br />
<br />
CHECK_INCLUDE_FILES ("${INCLUDES};io.h" HAVE_IO_H)<br />
if (HAVE_IO_H)<br />
set (INCLUDES ${INCLUDES} io.h)<br />
endif (HAVE_IO_H)<br />
<br />
# Check for all needed libraries<br />
set (LIBS "")<br />
CHECK_LIBRARY_EXISTS ("dl;${LIBS}" dlopen "" HAVE_LIBDL)<br />
if (HAVE_LIBDL)<br />
set (LIBS ${LIBS} dl)<br />
endif (HAVE_LIBDL)<br />
<br />
CHECK_LIBRARY_EXISTS ("ucb;${LIBS}" gethostname "" HAVE_LIBUCB)<br />
if (HAVE_LIBUCB)<br />
set (LIBS ${LIBS} ucb)<br />
endif (HAVE_LIBUCB)<br />
<br />
# Add the libraries we found to the libraries to use when<br />
# looking for symbols with the CHECK_SYMBOL_EXISTS macro<br />
set (CMAKE_REQUIRED_LIBRARIES ${LIBS})<br />
<br />
# Check for some functions that are used<br />
CHECK_SYMBOL_EXISTS (socket "${INCLUDES}" HAVE_SOCKET)<br />
CHECK_SYMBOL_EXISTS (poll "${INCLUDES}" HAVE_POLL)<br />
<br />
# Various type sizes<br />
CHECK_TYPE_SIZE (int SIZEOF_INT)<br />
CHECK_TYPE_SIZE (size_t SIZEOF_SIZE_T)<br />
</syntaxhighlight><br />
<br />
For more advanced try_compile and try_run operations, it may be desirable to pass flags to the compiler or to CMake. Both commands support the optional arguments CMAKE_FLAGS and COMPILE_DEFINITIONS. CMAKE_FLAGS can be used to pass -DVAR:TYPE=VALUE flags to CMake. The value of COMPILE_DEFINITIONS is passed directly to the compiler command line.<br />
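As a sketch of these optional arguments (the test file name HAS_FEATURE/testFeature.c and the definitions are hypothetical), a call might look like:<br />
<br />
<syntaxhighlight lang="text"><br />
try_compile (HAS_FEATURE<br />
${CMAKE_BINARY_DIR}<br />
${PROJECT_SOURCE_DIR}/testFeature.c<br />
CMAKE_FLAGS -DLINK_LIBRARIES:STRING=m<br />
COMPILE_DEFINITIONS -DUSE_FEATURE<br />
)<br />
</syntaxhighlight><br />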
<br />
<br />
===Finding Packages===<br />
<br />
Many software projects provide tools and libraries that are meant as building blocks for other projects and applications. CMake projects that depend on outside packages locate their dependencies using the find_package() (page 297) command. A typical invocation is of the form:<br />
<br />
<syntaxhighlight lang="text"><br />
find_package(<Package> [version])<br />
</syntaxhighlight><br />
<br />
where <Package> is the name of the package to be found, and [version] is an optional version request (of the form major[.minor[.patch]]). The command's notion of a package is distinct from that of CPack, which is meant for creating source and binary distributions and installers.<br />
<br />
The command operates in two modes: Module mode and Config mode. In Module mode, the command searches for a find-module: a file named Find<Package>.cmake. It looks first in the CMAKE_MODULE_PATH (page 646) and then in the CMake installation. If a find-module is found, it is loaded to search for individual components of the package. Find-modules contain package-specific knowledge of the libraries and other files they expect to find, and internally use commands like find_library() (page 294) to locate them. CMake provides find-modules for many common packages; see the cmake-modules(7) (page 366) manual. Find-modules are tedious and difficult to write and maintain because they need very specific knowledge of every version of the package to be found.<br />
<br />
The Config mode of find_package() (page 297) provides a powerful alternative through cooperation with the package to be found. It enters this mode after failing to locate a find-module or when explicitly requested by the caller. In Config mode the command searches for a package configuration file: a file named <Package>Config.cmake or <package>-config.cmake which is provided by the package to be found. Given the name of a package, the find_package command knows how to search deep inside installation prefixes for locations like:<br />
<br />
<syntaxhighlight lang="text"><br />
<prefix>/lib/<package>/<package>-config.cmake<br />
</syntaxhighlight><br />
<br />
(see documentation of the find_package command for a complete list of locations). CMake creates a cache entry called <Package>_DIR to store the location found or allow the user to set it. Since a package configuration file comes with an installation of its package, it knows exactly where to find everything provided by the installation. Once the find_package command locates the file it provides the locations of package components without any additional searching.<br />
<br />
The [version] option asks find_package to locate a particular version of the package. In Module mode, the command passes the request on to the find-module. In Config mode the command looks next to each candidate package configuration file for a package version file: a file named <Package>ConfigVersion.cmake or <package>-config-version.cmake. The version file is loaded to test whether the package version is an acceptable match for the version requested (see documentation of find_package for the version file API specification). If the version file claims compatibility the configuration file is accepted; otherwise it is ignored. This approach allows each project to define its own rules for version compatibility.<br />
<br />
<br />
===Built-in Find Modules===<br />
<br />
CMake has many predefined modules that can be found in the Modules subdirectory of CMake. The modules can find many common software packages. See the cmake-modules(7) (page 366) manual for a detailed list.<br />
<br />
Each Find<XX>.cmake module defines a set of variables that will allow a project to use the software package once it is found. Those variables all start with the name of the software being found, <XX>. With CMake we have tried to establish a convention for naming these variables, but you should read the comments at the top of the module for a more definitive answer. The following variables are used by convention when needed:<br />
<br />
<br />
'''<XX>_INCLUDE_DIRS''' Where to find the package's header files, typically <XX>.h, etc.<br />
<br />
'''<XX>_LIBRARIES''' The libraries to link against to use <XX>. These include full paths.<br />
<br />
'''<XX>_DEFINITIONS''' Preprocessor definitions to use when compiling code that uses <XX>.<br />
<br />
'''<XX>_EXECUTABLE''' Where to find the <XX> tool that is part of the package.<br />
<br />
'''<XX>_<YY>_EXECUTABLE''' Where to find the <YY> tool that comes with <XX>.<br />
<br />
'''<XX>_ROOT_DIR''' Where to find the base directory of the installation of <XX>. This is useful for large packages where you want to reference many files relative to a common base (or root) directory.<br />
<br />
'''<XX>_VERSION_<YY>''' Version <YY> of the package was found if true. Authors of find modules should make sure at most one of these is ever true. For example TCL_VERSION_84.<br />
<br />
'''<XX>_<YY>_FOUND''' If false, then the optional <YY> part of <XX> package is unavailable.<br />
<br />
'''<XX>_FOUND''' Set to false or undefined if we haven't found or don't want to use <XX>.<br />
<br />
Not all of the variables are present in each of the FindXX.cmake files; however, <XX>_FOUND should exist under most circumstances. If <XX> is a library, then <XX>_LIBRARIES should also be defined, and <XX>_INCLUDE_DIR should usually be defined.<br />
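As a sketch of this convention in use (FindJPEG is one of the modules shipped with CMake; the target myexe is assumed to exist already):<br />
<br />
<syntaxhighlight lang="text"><br />
find_package (JPEG)<br />
if (JPEG_FOUND)<br />
include_directories (${JPEG_INCLUDE_DIR})<br />
target_link_libraries (myexe ${JPEG_LIBRARIES})<br />
endif (JPEG_FOUND)<br />
</syntaxhighlight><br />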
<br />
Modules can be included in a project either with the include command or the find_package() (page 297) command.<br />
<br />
<syntaxhighlight lang="text"><br />
find_package(OpenGL)<br />
</syntaxhighlight><br />
<br />
is equivalent to:<br />
<br />
<syntaxhighlight lang="text"><br />
include(${CMAKE_ROOT}/Modules/FindOpenGL.cmake)<br />
</syntaxhighlight><br />
<br />
and<br />
<br />
<syntaxhighlight lang="text"><br />
include(FindOpenGL)<br />
</syntaxhighlight><br />
<br />
If a project being found converts to CMake for its build system, the find_package command will still work, provided that the package supplies a <XX>Config.cmake file. How to create a CMake package is described later in this chapter.<br />
<br />
<br />
===How to Pass Parameters to a Compilation===<br />
<br />
Once you have determined the features of the system in which you are interested, it is time to configure the software based on what has been found. There are two common ways to pass this information to the compiler: on the compile line, or using a preconfigured header. The first way is to pass definitions on the compile line. A preprocessor definition can be passed to the compiler from a CMakeLists file with the add_definitions() (page 273) command. For example, a common practice in C code is to have the ability to selectively compile in/out debug statements.<br />
<br />
<syntaxhighlight lang="text"><br />
#ifdef DEBUG_BUILD<br />
printf("the value of v is %d", v);<br />
#endif<br />
</syntaxhighlight><br />
<br />
A CMake variable could be used to turn on or off debug builds using the option() (page 327) command:<br />
<br />
<syntaxhighlight lang="text"><br />
option (DEBUG_BUILD<br />
"Build with extra debug print messages.")<br />
<br />
if (DEBUG_BUILD)<br />
add_definitions (-DDEBUG_BUILD)<br />
endif (DEBUG_BUILD)<br />
</syntaxhighlight><br />
<br />
Another example would be to tell the compiler the result of the previous HAS_FOOBAR_CALL test that was discussed earlier in this chapter. You could do this with the following:<br />
<br />
<syntaxhighlight lang="text"><br />
if (HAS_FOOBAR_CALL)<br />
add_definitions (-DHAS_FOOBAR_CALL)<br />
endif (HAS_FOOBAR_CALL)<br />
</syntaxhighlight><br />
<br />
If you want to pass preprocessor definitions at a finer level of granularity, you can use the COMPILE_DEFINITIONS property that is defined for directories, targets, and source files. For example, the code<br />
<br />
<syntaxhighlight lang="text"><br />
add_library (mylib src1.c src2.c)<br />
add_executable (myexe main2.c)<br />
set_property (<br />
DIRECTORY<br />
PROPERTY COMPILE_DEFINITIONS A AV=1<br />
)<br />
set_property (<br />
TARGET mylib<br />
PROPERTY COMPILE_DEFINITIONS B BV=2<br />
)<br />
set_property (<br />
SOURCE src1.c<br />
PROPERTY COMPILE_DEFINITIONS C CV=3<br />
)<br />
</syntaxhighlight><br />
<br />
will build the source files with these definitions:<br />
<br />
<syntaxhighlight lang="text"><br />
src1.c: -DA -DAV=1 -DB -DBV=2 -DC -DCV=3<br />
src2.c: -DA -DAV=1 -DB -DBV=2<br />
main2.c: -DA -DAV=1<br />
</syntaxhighlight><br />
<br />
When the add_definitions command is called with flags like -DX, the definitions are extracted and added to the current directory's COMPILE_DEFINITIONS (page 568) property. When a new subdirectory is created with add_subdirectory() (page 277), the current state of the directory-level property is used to initialize the same property in the subdirectory.<br />
<br />
Note in the above example that the set_property command will actually set the property and replace any existing value. The command provides the APPEND option to add more definitions without removing existing ones. For example, the code<br />
<br />
<syntaxhighlight lang="text"><br />
set_property (<br />
SOURCE src1.c<br />
APPEND PROPERTY COMPILE_DEFINITIONS D DV=4<br />
)<br />
</syntaxhighlight><br />
<br />
will add the definitions -DD -DDV=4 when building src1.c. Definitions may also be added on a per-configuration basis using the COMPILE_DEFINITIONS_<CONFIG> property; for example, appending MYLIB_DEBUG_MODE to the COMPILE_DEFINITIONS_DEBUG property of the mylib target will build its sources with -DMYLIB_DEBUG_MODE only when compiling in a Debug configuration.<br />
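That per-configuration append might be sketched as follows, reusing the mylib and MYLIB_DEBUG_MODE names from the text (this block is a reconstruction, not verbatim from the original example):<br />
<br />
<syntaxhighlight lang="text"><br />
set_property (<br />
TARGET mylib<br />
APPEND PROPERTY COMPILE_DEFINITIONS_DEBUG MYLIB_DEBUG_MODE<br />
)<br />
</syntaxhighlight><br />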
<br />
The second approach for passing definitions to the source code is to configure a header file. For maximum portability of a toolkit, it is recommended that -D options are not required for the compiler command line. Instead of command line options, CMake can be used to configure a header file that applications can include. The header file will include all of the #define macros needed to build the project. The problem with using compile line definitions can be seen when building an application that uses a library. If building the library correctly relies on compile line definitions, then chances are that an application that uses the library will also require the exact same set of compile line definitions; this puts a large burden on the application writer to make sure they add the correct flags to match the library. If instead the library's build process configures a header file with all of the required definitions, any application that uses the library will automatically get the correct definitions when that header file is included. A definition can often change the size of a structure or class, and if the macros are not exactly the same during the build process of the library and the application linking to the library, the application may reference the "wrong part" of a class or struct and crash unexpectedly.<br />
<br />
<br />
===How to Configure a Header File===<br />
<br />
Configured header files are the right choice for most software projects. To configure a file with CMake, the configure_file command is used. This command requires an input file that is parsed by CMake to produce an output file with all variables expanded or replaced. There are three ways to specify a variable in an input file for configure_file.<br />
<br />
<syntaxhighlight lang="text"><br />
#cmakedefine VARIABLE<br />
</syntaxhighlight><br />
<br />
If VARIABLE is true, then the result will be:<br />
<br />
<syntaxhighlight lang="text"><br />
#define VARIABLE<br />
</syntaxhighlight><br />
<br />
If VARIABLE is false, then the result will be:<br />
<br />
<syntaxhighlight lang="text"><br />
/* #undef VARIABLE */<br />
</syntaxhighlight><br />
<br />
'''${VARIABLE}''' This is simply replaced by the value of VARIABLE.<br />
<br />
'''@VARIABLE@''' This is simply replaced by the value of VARIABLE.<br />
<br />
Since the ${} syntax is commonly used by other languages, users can tell the configure_file command to only expand variables using the @var@ syntax by passing the @ONLY option to the command; this is useful if you are configuring a script that may contain ${var} strings that you want to preserve. This is important because CMake will replace all occurrences of ${var} with the empty string if var is not defined in CMake.<br />
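For instance, a shell script template could be configured with @ONLY so that the shell's own ${...} references survive untouched (the file names here are hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# myscript.sh.in uses @PROJECT_BINARY_DIR@ for CMake substitution<br />
# and plain shell ${HOME}-style variables that must be preserved<br />
configure_file (<br />
${PROJECT_SOURCE_DIR}/myscript.sh.in<br />
${PROJECT_BINARY_DIR}/myscript.sh<br />
@ONLY)<br />
</syntaxhighlight><br />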
<br />
The following example configures a .h file for a project that contains preprocessor variables. The first definition indicates if the FOOBAR call exists in the library, and the next one contains the path to the build tree.<br />
<br />
<syntaxhighlight lang="text"><br />
----CMakeLists.txt file-----<br />
<br />
# Configure a file from the source tree<br />
# called projectConfigure.h.in and put<br />
# the resulting configured file in the build<br />
# tree and call it projectConfigure.h<br />
<br />
configure_file (<br />
${PROJECT_SOURCE_DIR}/projectConfigure.h.in<br />
${PROJECT_BINARY_DIR}/projectConfigure.h)<br />
</syntaxhighlight><br />
<br />
<syntaxhighlight lang="text"><br />
-----projectConfigure.h.in file------<br />
/* define a variable to tell the code if the */<br />
/* foobar call is available on this system */<br />
#cmakedefine HAS_FOOBAR_CALL<br />
<br />
/* define a variable with the path to the */<br />
/* build directory */<br />
#define PROJECT_BINARY_DIR "${PROJECT_BINARY_DIR}"<br />
</syntaxhighlight><br />
<br />
It is important to configure files into the binary tree, not the source tree. A single source tree may be shared by multiple build trees or platforms. By configuring files into the binary tree the differences between builds or platforms will be kept isolated in the build tree and will not corrupt other builds. This means that you will need to include the directory of the build tree where you configured the header file into the project's list of include directories using the include_directories command.<br />
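Continuing the earlier example, if projectConfigure.h was configured into the top of the build tree, one line makes it visible to the compiler:<br />
<br />
<syntaxhighlight lang="text"><br />
# let #include "projectConfigure.h" find the configured copy<br />
include_directories (${PROJECT_BINARY_DIR})<br />
</syntaxhighlight><br />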
<br />
<br />
===Creating CMake Package Configuration Files===<br />
<br />
Projects must provide package configuration files so that outside applications can find them. Consider a simple project "Gromit" providing an executable to generate source code and a library against which the generated code must link. The CMakeLists.txt file might start with:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6.3)<br />
project (Gromit C)<br />
set (version 1.0)<br />
<br />
# Create library and executable.<br />
add_library (gromit STATIC gromit.c gromit.h)<br />
add_executable (gromit-gen gromit-gen.c)<br />
</syntaxhighlight><br />
<br />
In order to install Gromit and export its targets for use by outside projects, add the code:<br />
<br />
<syntaxhighlight lang="text"><br />
# Install and export the targets.<br />
install (FILES gromit.h DESTINATION include/gromit-${version})<br />
install (TARGETS gromit gromit-gen<br />
DESTINATION lib/gromit-${version}<br />
EXPORT gromit-targets)<br />
install (EXPORT gromit-targets<br />
DESTINATION lib/gromit-${version})<br />
</syntaxhighlight><br />
<br />
as described in Section 4.11. Finally, Gromit must provide a package configuration file in its installation tree so that outside projects can locate it with find_package:<br />
<br />
<syntaxhighlight lang="text"><br />
# Create and install package configuration and version files.<br />
configure_file (<br />
${Gromit_SOURCE_DIR}/pkg/gromit-config.cmake.in<br />
${Gromit_BINARY_DIR}/pkg/gromit-config.cmake @ONLY)<br />
<br />
configure_file (<br />
${Gromit_SOURCE_DIR}/gromit-config-version.cmake.in<br />
${Gromit_BINARY_DIR}/gromit-config-version.cmake @ONLY)<br />
<br />
install (FILES ${Gromit_BINARY_DIR}/pkg/gromit-config.cmake<br />
${Gromit_BINARY_DIR}/gromit-config-version.cmake<br />
DESTINATION lib/gromit-${version})<br />
</syntaxhighlight><br />
<br />
This code configures and installs the package configuration file and a corresponding package version file. The package configuration input file gromit-config.cmake.in has the code:<br />
<br />
<syntaxhighlight lang="text"><br />
# Compute installation prefix relative to this file.<br />
get_filename_component (_dir "${CMAKE_CURRENT_LIST_FILE}" PATH)<br />
get_filename_component (_prefix "${_dir}/../.." ABSOLUTE)<br />
<br />
# Import the targets.<br />
include ("${_prefix}/lib/gromit-@version@/gromit-targets.cmake")<br />
<br />
# Report other information.<br />
set (gromit_INCLUDE_DIRS "${_prefix}/include/gromit-@version@")<br />
</syntaxhighlight><br />
<br />
After installation, the configured package configuration file gromit-config.cmake knows the locations of other installed files relative to itself. The corresponding package version file is configured from its input file gromit-config-version.cmake.in, which contains code such as:<br />
<br />
<syntaxhighlight lang="text"><br />
set (PACKAGE_VERSION "@version@")<br />
if (NOT "${PACKAGE_FIND_VERSION}" VERSION_GREATER "@version@")<br />
set (PACKAGE_VERSION_COMPATIBLE 1) # compatible with older<br />
if ("${PACKAGE_FIND_VERSION}" VERSION_EQUAL "@version@")<br />
set (PACKAGE_VERSION_EXACT 1) # exact match for this version<br />
endif ()<br />
endif ()<br />
</syntaxhighlight><br />
<br />
An application that uses the Gromit package might create a CMake file that looks like this:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6.3)<br />
project (MyProject C)<br />
<br />
find_package (gromit 1.0 REQUIRED)<br />
include_directories (${gromit_INCLUDE_DIRS})<br />
# run imported executable<br />
add_custom_command (OUTPUT generated.c<br />
COMMAND gromit-gen generated.c)<br />
add_executable (myexe generated.c)<br />
target_link_libraries (myexe gromit) # link to imported library<br />
</syntaxhighlight><br />
<br />
The call to find_package locates an installation of Gromit or terminates with an error message if none can be found (due to REQUIRED). After the command succeeds, the Gromit package configuration file gromit-config.cmake has been loaded, so Gromit targets have been imported and variables like gromit_INCLUDE_DIRS have been defined.<br />
<br />
The above example creates a package configuration file and places it in the install tree. One may also create a package configuration file in the build tree to allow applications to use the project without installation. In order to do this, one extends Gromit's CMake file with the code:<br />
<br />
<syntaxhighlight lang="text"><br />
# Make project usable from build tree.<br />
export (TARGETS gromit gromit-gen FILE gromit-targets.cmake)<br />
configure_file (${Gromit_SOURCE_DIR}/gromit-config.cmake.in<br />
${Gromit_BINARY_DIR}/gromit-config.cmake @ONLY)<br />
</syntaxhighlight><br />
<br />
This configure_file call uses a different input file, gromit-config.cmake.in, containing:<br />
<br />
<syntaxhighlight lang="text"><br />
# Import the targets.<br />
include ("@Gromit_BINARY_DIR@/gromit-targets.cmake")<br />
<br />
# Report other information.<br />
set (gromit_INCLUDE_DIRS "@Gromit_SOURCE_DIR@")<br />
</syntaxhighlight><br />
<br />
The package configuration file gromit-config.cmake placed in the build tree provides the same information to an outside project as that in the install tree, but refers to files in the source and build trees. It shares the identical package version file gromit-config-version.cmake that is placed in the install tree.<br />
<br />
<br />
===CMake Package Registry===<br />
<br />
CMake 2.8.5 and later provide two central locations to register packages that have been built or installed anywhere on a system: a User Package Registry and a System Package Registry. The find_package command searches the two package registries as two of the search steps specified in its documentation. The registries are especially useful for helping projects find packages in non-standard install locations or directly in the package build trees. A project may populate either the user or system registry (using its own means) to refer to its location. In either case, the package should store a package configuration file at the registered location and optionally a package version file as discussed in the Finding Packages section.<br />
<br />
The User Package Registry is stored in a platform-specific, per-user location. On Windows it is stored in the<br />
Windows registry under a key in HKEY_CURRENT_USER. A <package> may appear under registry key<br />
<br />
<syntaxhighlight lang="text"><br />
HKEY_CURRENT_USER\Software\Kitware\CMake\Packages\<package><br />
</syntaxhighlight><br />
<br />
as a REG_SZ value with arbitrary name that specifies the directory containing the package configuration file. On UNIX platforms, the user package registry is stored in the user home directory under ~/.cmake/packages. A <package> may appear under the directory<br />
<br />
<syntaxhighlight lang="text"><br />
~/.cmake/packages/<package><br />
</syntaxhighlight><br />
<br />
as a file with arbitrary name whose content specifies the directory containing the package configuration file. The export (PACKAGE) command may be used to register a project build tree in the user package registry. CMake does not currently provide an interface to add install trees to the user package registry; installers must be manually taught to register their packages if desired.<br />
<br />
The System Package Registry is stored in a platform-specific, system-wide location. On Windows it is stored in the Windows registry under a key in HKEY_LOCAL_MACHINE. A <package> may appear under registry key<br />
<br />
<syntaxhighlight lang="text"><br />
HKEY_LOCAL_MACHINE\Software\Kitware\CMake\Packages\<package><br />
</syntaxhighlight> <br />
<br />
as a REG_SZ value with arbitrary name that specifies the directory containing the package configuration file. There is no system package registry on non-Windows platforms. CMake does not provide an interface to add to the system package registry; installers must be manually taught to register their packages if desired.<br />
<br />
Package registry entries are individually owned by the project installations that they reference. A package installer is responsible for adding its own entry and the corresponding uninstaller is responsible for removing it. However, in order to keep the registries clean, the find_package command automatically removes stale package registry entries it encounters if it has sufficient permissions. An entry is considered stale if it refers to a directory that does not exist or does not contain a matching package configuration file. This is particularly useful for user package registry entries created by the export(PACKAGE) command for build trees which have no uninstall event and are simply deleted by developers.<br />
<br />
Package registry entries may have arbitrary name. A simple convention for naming them is to use content hashes, as they are deterministic and unlikely to collide. The export(PACKAGE) command uses this approach. The name of an entry referencing a specific directory is simply the content hash of the directory path itself. For example, a project may create package registry entries such as<br />
<br />
<syntaxhighlight lang="text"><br />
> reg query HKCU\Software\Kitware\CMake\Packages\MyPackage<br />
HKEY_CURRENT_USER\Software\Kitware\CMake\Packages\MyPackage<br />
  45e7d55f13b87179bb12f907c8de6fc4    REG_SZ    c:/Users/Me/Work/lib/cmake/MyPackage<br />
  7b4a9844f681c80ce93190d4e3185db9    REG_SZ    c:/Users/Me/Work/MyPackage-build<br />
</syntaxhighlight><br />
<br />
on Windows, or<br />
<br />
<syntaxhighlight lang="text"><br />
$ cat ~/.cmake/packages/MyPackage/7d1fb77e07ce59a81bed093bbee945bd<br />
/home/me/work/lib/cmake/MyPackage<br />
$ cat ~/.cmake/packages/MyPackage/f92c1db873a1937f3100706657c63e07<br />
/home/me/work/MyPackage-build<br />
</syntaxhighlight><br />
<br />
on UNIX. The command find_package(MyPackage) will search the registered locations for package configuration files. The search order among package registry entries for a single package is unspecified. Registered locations may contain package version files to tell find_package whether a specific location is suitable for the version requested.<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=MastringCmakeVersion31:Chapter_04&diff=5602MastringCmakeVersion31:Chapter 042020-09-21T12:00:05Z<p>Onionmixer: CMAKE Chapter 4</p>
<hr />
<div>==CHAPTER FOUR : WRITING CMAKELISTS FILES==<br />
<br />
This chapter will cover the basics of writing effective CMakeLists files for your software. It will cover the basic commands and issues you will need to handle most projects. It will also discuss how to convert existing UNIX or Windows projects into CMakeLists files. While CMake can handle extremely complex projects, for most projects you will find this chapter's contents will tell you all you need to know. CMake is driven by the CMakeLists.txt files written for a software project. The CMakeLists files determine everything from which options to present to users, to which source files to compile. In addition to discussing how to write a CMakeLists file, this chapter will also cover how to make them robust and maintainable. The basic syntax of a CMakeLists.txt file and key concepts of CMake have already been discussed in chapters 2 and 3. This chapter will expand on those concepts and introduce a few new ones.<br />
<br />
===CMake Language===<br />
<br />
As discussed in Chapter 2, CMakeLists files follow a simple syntax consisting of comments, commands, and whitespace. A comment is indicated using the # character and runs from that character until the end of the line. A command consists of the command name, opening parenthesis, whitespace-separated arguments, and a closing parenthesis. All whitespace (spaces, line feeds, tabs) is ignored except to separate arguments. Anything within a set of double quotes is treated as one argument, as is typical for most languages. The backslash can be used to escape characters, preventing the normal interpretation of them. The subsequent examples in this chapter will help to clear up some of these syntactic issues. You might wonder why CMake decided to have its own language instead of using an existing one such as Python, Java, or Tcl. The main reason is that we did not want to make CMake require an additional tool to run. By requiring one of these other languages, all users of CMake would be required to have that language installed, and potentially a specific version of that language. This is on top of the language extensions that would be required to do some of the CMake work, for both performance and capability reasons.<br />
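<br />
The syntax rules above can be seen in a short, self-contained sketch (the variable names are purely illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
# this comment runs from the # character to the end of the line<br />
set (SOURCES main.c util.c)           # three whitespace-separated arguments<br />
set (GREETING "Hello, World")         # the quoted text is a single argument<br />
message ("She said: \"${GREETING}\"") # backslashes escape the inner quotes<br />
</syntaxhighlight><br />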
<br />
===Basic Commands===<br />
<br />
While the previous chapters have already introduced many of the basic commands for CMakeLists files, this chapter will review and expand on them. The top-level CMakeLists file should call the PROJECT() (page 327) command. This command both names the project and optionally specifies which languages will be used by it:<br />
<br />
<syntaxhighlight lang="text"><br />
project (projectname [C] [CXX] [Fortran] [NONE])<br />
</syntaxhighlight><br />
<br />
If no languages are specified then CMake defaults to supporting C and C++. If the NONE language is passed then CMake does not include language-specific support.<br />
<br />
For each directory in a project where the CMakeLists.txt file invokes the project command, CMake generates a top-level IDE project file. The project will contain all targets that are in the CMakeLists.txt file and any subdirectories, as specified by the add_subdirectory() (page 277) command. If the EXCLUDE_FROM_ALL (page 569) option is used in the add_subdirectory command, the generated project will not appear in the top-level Makefile or IDE project file; this is useful for generating sub-projects that do not make sense as part of the main build process. Consider that a project with a number of examples could use this feature to generate the build files for each example with one run of CMake, but not have the examples built as part of the normal build process.<br />
<br />
The set and unset commands manipulate variables and entries in the persistent cache. The string() (page 335), list() (page 323), remove() (page 349), and separate_arguments() (page 329) commands offer basic manipulation of strings and lists.<br />
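<br />
As a brief sketch of these commands in action (the variable names are illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
set (MYLIST alpha beta)             # create a list variable<br />
list (APPEND MYLIST gamma)          # MYLIST is now alpha;beta;gamma<br />
list (LENGTH MYLIST MYLIST_LENGTH)  # MYLIST_LENGTH is 3<br />
string (TOUPPER "alpha" UPPER_NAME) # UPPER_NAME is ALPHA<br />
unset (MYLIST)                      # remove the variable<br />
</syntaxhighlight><br />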
<br />
The add_executable() (page 273) and add_library() (page 274) commands are the main commands for defining the libraries and executables to build, and which source files comprise them. For Visual Studio projects, the source files will show up in the IDE as usual, but any header files the project uses will not be. To have the header files show up, simply add them to the list of source files for the executable or library; this can be done for all generators. Any generators that do not use the header files directly (such as Makefile based generators) will simply ignore them.<br />
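<br />
For example, a project might list its header files alongside the sources so that they appear in IDE projects (the file and target names here are hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# the header files are ignored by Makefile generators<br />
# but are shown in Visual Studio projects<br />
add_library (mylib mylib.c mylib.h)<br />
add_executable (myapp main.c myapp.h)<br />
target_link_libraries (myapp mylib)<br />
</syntaxhighlight><br />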
<br />
<br />
===Flow Control===<br />
<br />
The CMake language provides three flow control constructs:<br />
<br />
* Conditional statements (e.g. if() (page 313))<br />
* Looping constructs (e.g. foreach() (page 309) and while() (page 345))<br />
* Procedure definitions (e.g. macro() (page 324) and function() (page 309))<br />
<br />
First we will consider the if command. In many ways, the if command in CMake is just like the if command in any other language. It evaluates its expression and uses it to execute the code in its body or optionally the code in the else() (page 284) clause. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
if (FOO)<br />
# do something here<br />
else ()<br />
# do something here<br />
endif ()<br />
</syntaxhighlight><br />
<br />
The condition in the if statement may optionally be repeated in the else and endif() (page 285) clauses:<br />
<br />
<syntaxhighlight lang="text"><br />
if (FOO)<br />
# do something here<br />
else (FOO)<br />
# do something here<br />
endif (FOO)<br />
</syntaxhighlight><br />
<br />
In this book, you will see examples of both styles. When you include conditionals in the else and endif clause then they must exactly match the original conditional of the if statement. The following code would not work:<br />
<br />
<syntaxhighlight lang="text"><br />
set (FOO 1)<br />
if (${FOO})<br />
# do something<br />
endif (1)<br />
# ERROR, it doesn't match the original if conditional<br />
</syntaxhighlight><br />
<br />
CMake provides verbose error messages in cases where an if statement is not properly matched with an endif.<br />
<br />
CMake also supports elseif() (page 284) to help sequentially test for multiple conditions. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
if (MSVC80)<br />
# do something here<br />
elseif (MSVC90)<br />
# do something here<br />
elseif (APPLE)<br />
# do something here<br />
endif ()<br />
</syntaxhighlight><br />
<br />
The if command documents the many conditions it can test. Some of the more common conditions include:<br />
<br />
''if (constant)'' True if the constant is 1, ON, YES, TRUE, Y, or a non-zero number. False if the constant is 0, OFF, NO, FALSE, N, IGNORE, an empty string, or ends in the suffix "-NOTFOUND". Named boolean constants are case-insensitive. If the argument is not one of these constants then it is treated as a variable.<br />
''if (variable)'' True if the variable is defined to a value that is not a false constant.<br />
''if (NOT <expression>)'' True if the expression is not true.<br />
''if (<expr1> AND <expr2>)'' True if both expressions would be considered true individually.<br />
''if (<expr1> OR <expr2>)'' True if either expression would be considered true individually.<br />
''if (DEFINED variable)'' True if the given variable has been set, regardless of what value it was set to.<br />
''if (<variable|string> MATCHES regex)'' True if the given string or variable's value matches the given regular expression.<br />
<br />
Additional binary test operators include EQUAL, LESS, and GREATER for numeric comparisons; STRLESS, STREQUAL, and STRGREATER for lexicographic comparisons; and VERSION_LESS, VERSION_EQUAL, and VERSION_GREATER to compare versions of the form major[.minor[.patch[.tweak]]].<br />
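<br />
For instance, a small example comparing a hypothetical version variable:<br />
<br />
<syntaxhighlight lang="text"><br />
set (MYLIB_VERSION 1.2.3)<br />
if (${MYLIB_VERSION} VERSION_GREATER 1.2)<br />
  message ("newer than 1.2")   # 1.2.3 is treated as newer than 1.2<br />
endif ()<br />
if ("abc" STRLESS "abd")<br />
  message ("abc sorts before abd lexicographically")<br />
endif ()<br />
</syntaxhighlight><br />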
<br />
The OR test has the lowest precedence, followed by AND, then NOT, and then any other test. Tests of the same precedence are performed from left-to-right. Expressions may be enclosed in parentheses to adjust precedence. For example, consider the following conditionals:<br />
<br />
<syntaxhighlight lang="text"><br />
if ((1 LESS 2) AND (3 LESS 4))<br />
message ("sequence of numbers")<br />
endif ()<br />
<br />
if (1 AND 3 AND 4)<br />
message ("series of true values")<br />
endif (1 AND 3 AND 4)<br />
<br />
if (NOT 0 AND 3 AND 4)<br />
message ("a false value")<br />
endif (NOT 0 AND 3 AND 4)<br />
<br />
if (0 OR 3 AND 4)<br />
message ("or statements")<br />
endif (0 OR 3 AND 4)<br />
<br />
if (EXISTS ${PROJECT_SOURCE_DIR}/Help.txt AND COMMAND IF)<br />
message ("Help exists")<br />
endif (EXISTS ${PROJECT_SOURCE_DIR}/Help.txt AND COMMAND IF)<br />
<br />
set (fooba 0)<br />
<br />
if (NOT DEFINED foobar)<br />
message ("foobar is not defined")<br />
endif (NOT DEFINED foobar)<br />
<br />
if (NOT DEFINED fooba)<br />
message ("fooba not defined")<br />
endif (NOT DEFINED fooba)<br />
<br />
if (NOT 0 AND 0)<br />
message ("This line is never executed")<br />
endif (NOT 0 AND 0)<br />
<br />
if (NOT (0 AND 0))<br />
message ("This line is always executed")<br />
endif (NOT (0 AND 0))<br />
</syntaxhighlight><br />
<br />
Now let us consider the other flow control commands. The foreach, while, macro, and function commands are the best way to reduce the size of your CMakeLists files and keep them maintainable. The foreach() (page 309) command enables you to execute a group of CMake commands repeatedly on the members of a list. Consider the following example adapted from VTK<br />
<br />
<syntaxhighlight lang="text"><br />
foreach (tfile<br />
TestAnisotropicDiffusion2D<br />
TestButterworthLowPass<br />
TestButterworthHighPass<br />
TestCityBlockDistance<br />
TestConvolve<br />
)<br />
  add_test (${tfile}-image ${VTK_EXECUTABLE}<br />
    ${VTK_SOURCE_DIR}/Tests/rtImageTest.tcl<br />
    ${VTK_SOURCE_DIR}/Tests/${tfile}.tcl<br />
    -D ${VTK_DATA_ROOT}<br />
    -V Baseline/Imaging/${tfile}.png<br />
    -A ${VTK_SOURCE_DIR}/Wrapping/Tcl<br />
    )<br />
endforeach (tfile)<br />
</syntaxhighlight><br />
<br />
The first argument of the foreach command is the name of the variable that will take on a different value with each iteration of the loop; the remaining arguments are the list of values over which to loop. In this example, the body of the foreach loop is just one CMake command, add_test. In the body of the foreach loop, each reference to the loop variable (tfile in this example) will be replaced with the current value from the list. In the first iteration, occurrences of ${tfile} will be replaced with TestAnisotropicDiffusion2D. In the next iteration, ${tfile} will be replaced with TestButterworthLowPass. The foreach loop will continue to loop until all of the arguments have been processed.<br />
<br />
It is worth mentioning that foreach loops can be nested, and that the loop variable is replaced prior to any other variable expansion. This means that in the body of a foreach loop, you can construct variable names using the loop variable. In the code below, the loop variable tfile is expanded, and then concatenated with _TEST_RESULT. The new variable name is then expanded and tested to see if it matches FAILED.<br />
<br />
<syntaxhighlight lang="text"><br />
if (${${tfile}_TEST_RESULT} MATCHES FAILED)<br />
  message ("Test ${tfile} failed.")<br />
endif ()<br />
</syntaxhighlight><br />
<br />
The while() (page 345) command provides looping based on a test condition. The format for the test expression in the while command is the same as it is for the if command, as described earlier. Consider the following example, which is used by CTest. Note that CTest updates the value of CTEST_ELAPSED_TIME internally.<br />
<br />
<syntaxhighlight lang="text"><br />
#####################################################<br />
# run paraview and ctest test dashboards for 6 hours<br />
#<br />
while (${CTEST_ELAPSED_TIME} LESS 36000)<br />
  set (START_TIME ${CTEST_ELAPSED_TIME})<br />
  ctest_run_script ("dash1_ParaView_vs71continuous.cmake")<br />
  ctest_run_script ("dash1_cmake_vs71continuous.cmake")<br />
endwhile ()<br />
</syntaxhighlight><br />
<br />
The foreach and while commands allow you to handle repetitive tasks that occur in sequence, whereas the macro and function commands support repetitive tasks that may be scattered throughout your CMakeLists files. Once a macro or function is defined, it can be used by any CMakeLists files processed after its definition.<br />
<br />
A function in CMake is very much like a function in C or C++. You can pass arguments into it, and they become variables within the function. Likewise, some standard variables such as ARGC, ARGV, ARGN, and ARGV0, ARGV1, etc. are defined. Function calls have a dynamic scope. Within a function you are in a new variable scope; this is like how you drop into a subdirectory using the add_subdirectory() (page 277) command and are in a new variable scope. All the variables that were defined when the function was called remain defined, but any changes to variables or new variables only exist within the function. When the function returns, those variables will go away. Put more simply: when you invoke a function, a new variable scope is pushed; when it returns, that variable scope is popped.<br />
<br />
The function() (page 309) command defines a new function. The first argument is the name of the function to define; all additional arguments are formal parameters to the function.<br />
<br />
<syntaxhighlight lang="text"><br />
function (DetermineTime _time)<br />
  # pass the result up to whatever invoked this<br />
  set (${_time} "1:23:45" PARENT_SCOPE)<br />
endfunction ()<br />
<br />
# now use the function we just defined<br />
DetermineTime (current_time)<br />
<br />
if( DEFINED current_time )<br />
message(STATUS "The time is now: ${current_time}")<br />
endif ()<br />
</syntaxhighlight><br />
<br />
Note that in this example, _time is used to pass the name of the return variable. The set() (page 330) command is invoked with the value of _time, which will be current_time. Finally, the set command uses the PARENT_SCOPE option to set the variable in the caller's scope instead of the local scope.<br />
<br />
Macros are defined and called in the same manner as functions. The main differences are that a macro does not push and pop a new variable scope, and that the arguments to a macro are not treated as variables but as strings replaced prior to execution. This is very much like the differences between a macro and a function in C or C++. The first argument is the name of the macro to create; all additional arguments are formal parameters to the macro.<br />
<br />
<syntaxhighlight lang="text"><br />
# define a simple macro<br />
macro (assert TEST COMMENT)<br />
  if (NOT ${TEST})<br />
    message ("Assertion failed: ${COMMENT}")<br />
  endif (NOT ${TEST})<br />
endmacro (assert)<br />
<br />
# use the macro<br />
find_library (FOO_LIB foo /usr/local/lib)<br />
assert (${FOO_LIB} "Unable to find library foo")<br />
</syntaxhighlight><br />
<br />
The simple example above creates a macro called assert. The macro is defined to take two arguments; the first is a value to test and the second is a comment to print out if the test fails. The body of the macro is a simple if() (page 313) command with a message() (page 326) command inside of it. The macro body ends when the endmacro() (page 285) command is found. The macro can be invoked simply by using its name as if it were a command. In the above example, if FOO_LIB was not found then a message would be displayed indicating the error condition.<br />
<br />
The macro command also supports defining macros that take variable argument lists. This can be useful if you want to define a macro that has optional arguments or multiple signatures. Variable arguments can be referenced using ARGC and ARGV0, ARGV1, etc., instead of the formal parameters. ARGV0 represents the first argument to the macro; ARGV1 represents the next, and so forth. You can also use a mixture of formal arguments and variable arguments, as shown in the example below.<br />
<br />
<syntaxhighlight lang="text"><br />
# define a macro that takes at least two arguments<br />
# (the formal arguments) plus an optional third argument<br />
macro (assert TEST COMMENT)<br />
  if (NOT ${TEST})<br />
    message ("Assertion failed: ${COMMENT}")<br />
<br />
    # if called with three arguments then also write the<br />
    # message to a file specified as the third argument<br />
    if (${ARGC} MATCHES 3)<br />
      file (APPEND ${ARGV2} "Assertion failed: ${COMMENT}")<br />
    endif (${ARGC} MATCHES 3)<br />
<br />
  endif (NOT ${TEST})<br />
endmacro (assert)<br />
<br />
# use the macro<br />
find_library (FOO_LIB foo /usr/local/lib)<br />
assert (${FOO_LIB} "Unable to find library foo")<br />
</syntaxhighlight><br />
<br />
In this example, the two required arguments are TEST and COMMENT. These required arguments can be referenced by name, as they are in this example, or by referencing ARGV0 and ARGV1. If you want to process the arguments as a list, use the ARGV and ARGN variables. ARGV (as opposed to ARGV0, ARGV1, etc) is a list of all the arguments to the macro, while ARGN is a list of all the arguments after the formal arguments. Inside your macro, you can use the foreach command to iterate over ARGV or ARGN as desired.<br />
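<br />
A minimal sketch of iterating over the extra arguments might look like this (the macro name is illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
macro (report_extras TEST COMMENT)<br />
  # ARGN holds every argument after the formal parameters<br />
  foreach (extra_arg ${ARGN})<br />
    message ("extra argument: ${extra_arg}")<br />
  endforeach ()<br />
endmacro ()<br />
<br />
# prints "extra argument: log1.txt" and "extra argument: log2.txt"<br />
report_extras (1 "a comment" log1.txt log2.txt)<br />
</syntaxhighlight><br />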
<br />
CMake has two commands for interrupting the processing flow. The break() (page 279) command breaks out of a foreach or while loop before it would normally end. The return() (page 328) command returns from a function or listfile before the function or listfile has reached its end.<br />
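<br />
As a small illustration of both commands:<br />
<br />
<syntaxhighlight lang="text"><br />
# break: stop looping once an item is found<br />
foreach (item a b c d)<br />
  if ("${item}" STREQUAL "c")<br />
    break ()<br />
  endif ()<br />
endforeach ()<br />
<br />
# return: leave a function early when there is nothing to do<br />
function (process_value _value)<br />
  if (NOT _value)<br />
    return ()<br />
  endif ()<br />
  message ("processing ${_value}")<br />
endfunction ()<br />
</syntaxhighlight><br />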
<br />
<br />
===Regular Expressions===<br />
<br />
A few CMake commands, such as if() (page 313) and string() (page 335), make use of regular expressions or can take a regular expression as an argument. In its simplest form, a regular expression is a sequence of characters used to search for exact character matches. However, many times the exact sequence to be found is unknown, or only a match at the beginning or end of a string is desired. Since there are several different conventions for specifying regular expressions, CMake's standard is described below. The description is based on the open source regular expression class from Texas Instruments, which is used by CMake for parsing regular expressions.<br />
<br />
Regular expressions can be specified by using combinations of standard alphanumeric characters and the following regular expression meta-characters:<br />
<br />
<br />
'''^''' Matches at the beginning of a line or string.<br />
'''$''' Matches at the end of a line or string.<br />
'''.''' Matches any single character other than a new line.<br />
'''[ ]''' Matches any character(s) inside the brackets.<br />
'''[^ ]''' Matches any character(s) not inside the brackets.<br />
'''[ - ]''' Matches any character in range on either side of a dash.<br />
'''*''' Matches the preceding pattern zero or more times.<br />
'''+''' Matches the preceding pattern one or more times.<br />
'''?''' Matches the preceding pattern zero times or once only.<br />
'''()''' Saves a matched expression and uses it in a later replacement.<br />
'''( | )''' Matches either the left-or-right side of the bar.<br />
<br />
<br />
Note that more than one of these meta-characters can be used in a single regular expression in order to create complex search patterns. For example, the pattern [^ab1-9] says to match any character sequence that does not begin with the characters "a" or "b" or numbers in the series one through nine. The following examples may help clarify regular expression usage:<br />
<br />
* The regular expression "^hello" matches a "hello" only at the beginning of a search string. It would match "hello there," but not "hi, hello there."<br />
* The regular expression "long$" matches a "long" only at the end of a search string. It would match "so long," but not "long ago."<br />
* The regular expression "t..t..g" will match anything that has a "t" and any two characters, followed by another "t," and any two characters, and then a "g." It would match "testing" or "test again," but would not match "toasting."<br />
* The regular expression "[1-9ab]" matches any number one-through-nine, and the characters "a" and "b". It would match "hello 1" or "begin", but would not match "no-match".<br />
* The regular expression "[^1-9ab]" matches any character that is not a number one-through-nine, or an "a" or "b." It would NOT match "1ab2" or "b2345a," but would match "no-match."<br />
* The regular expression "br* " matches something that begins with a "b," is followed by zero or more "r"s, and ends in a space. It would match "brrrrr " and "b ", but would not match "brrh."<br />
* The regular expression "br+ " matches something that begins with a "b," is followed by one or more "r"s, and ends in a space. It would match "brrrrr " and "br ", but would not match "b " or "brrh."<br />
* The regular expression "br? " matches something that begins with a "b," is followed by zero or one "r"s, and ends in a space. It would match "br " and "b ", but would not match "brrrr " or "brrh."<br />
* The regular expression "(..p)b" matches something ending with pb and beginning with the two characters before the first "p" encountered in the line. For example, it would find "repb" in "rep drepa qrepb." The regular expression "(..p)a" would find "repa" in "rep drepa qrepb."<br />
* The regular expression "d(..p)" matches something ending with "p," beginning with "d," and having two characters in-between that are the same as the two characters before the first "p" encountered in the line. It would match "drepa qrepb" in "rep drepa qrepb."<br />
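<br />
These patterns can be exercised directly with the string command; for example (the variable name OUT is arbitrary):<br />
<br />
<syntaxhighlight lang="text"><br />
string (REGEX MATCH "^hello" OUT "hello there")          # OUT is "hello"<br />
string (REGEX MATCH "[1-9ab]" OUT "hello 1")             # OUT is "1"<br />
string (REGEX REPLACE "t..t..g" "passing" OUT "testing") # OUT is "passing"<br />
</syntaxhighlight><br />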
<br />
<br />
===Checking Versions of CMake===<br />
<br />
CMake is an evolving program and as new versions are released, new features or commands are introduced. As a result, there may be instances where you might want to use a command that is in a current version of CMake but not in previous versions. There are a couple of ways to handle this; one option is to use the if() (page 313) command to check whether a new command exists. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
# test if the command exists<br />
if (COMMAND some_new_command)<br />
# use the command<br />
some_new_command ( ARGS... )<br />
endif ()<br />
</syntaxhighlight><br />
<br />
Alternatively, one may test against the actual version of CMake that is being run by evaluating the CMAKE_VERSION (page 634) variable:<br />
<br />
<syntaxhighlight lang="text"><br />
# look for newer versions of CMake<br />
if (${CMAKE_VERSION} VERSION_GREATER 2.6.3)<br />
# do something special here<br />
endif ()<br />
</syntaxhighlight><br />
<br />
When writing your CMakeLists files, you may decide that you do not want to support old versions of CMake. To do this, place the following command at the top of your CMakeLists file<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6.3)<br />
</syntaxhighlight><br />
<br />
This indicates that the person running CMake must have at least version 2.6.3. If they are running an older version of CMake, an error message will be displayed telling them that the project requires at least the specified version of CMake.<br />
<br />
Finally, some new releases of CMake might no longer support some behavior you were using (although we try to avoid this). In these cases, use CMake policies, as discussed in the cmake-policies(7) (page 537) manual.<br />
<br />
<br />
===Using Modules===<br />
<br />
Code reuse is a valuable technique in software development and CMake has been designed to support it. Allowing CMakeLists files to make use of reusable modules enables the entire community to share reusable sections of code. For CMake, these sections are called modules and can be found in the Modules subdirectory of your installation. Modules are simply sections of CMake commands put into a file; they can then be included into other CMakeLists files using the include() (page 317) command. For example, the following commands will include the CheckTypeSize module from CMake and then use the macro it defines.<br />
<br />
<syntaxhighlight lang="text"><br />
include (CheckTypeSize)<br />
check_type_size(long SIZEOF_LONG)<br />
</syntaxhighlight><br />
<br />
A module's location can be specified using the full path to the module file, or by letting CMake find the module by itself. CMake will look for modules in the directories specified by CMAKE_MODULE_PATH (page 646); if it cannot find it there, it will look in the Modules subdirectory. This way projects can override modules that CMake provides and customize them for their needs. Modules can be broken into a few main categories:<br />
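<br />
For example, a project that keeps customized or additional modules in its own cmake subdirectory (the directory name is merely a common convention) might use:<br />
<br />
<syntaxhighlight lang="text"><br />
# search the project's cmake directory before CMake's own Modules directory<br />
set (CMAKE_MODULE_PATH "${PROJECT_SOURCE_DIR}/cmake" ${CMAKE_MODULE_PATH})<br />
include (CheckTypeSize) # a module in cmake/ would be found first<br />
</syntaxhighlight><br />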
<br />
'''Find Modules''' These modules support the find_package() (page 297) command to determine the location of software elements, such as header files or libraries, that belong to a given package. Do not include them directly. Use the find_package() (page 297) command. Each module comes with documentation describing the package it finds and the variables in which it provides results. Conventions used in Find modules are covered in more detail in Chapter 5.<br />
<br />
'''System Introspection Modules''' These modules test the system to provide information about the target platform or compiler, such as the size of a float or support for ANSI C++ streams. Many of these modules have names prefixed with Test or Check, such as TestBigEndian and CheckTypeSize. Some of them try to compile code in order to determine the correct result. In these cases, the source code is typically named the same as the module, but with a .c or .cxx extension. System introspection modules are covered in more detail in Chapter 12.<br />
<br />
'''Utility Modules''' These modules provide useful macros and functions implemented in the CMake language and intended for specific, common use cases. See documentation of each module for details.<br />
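<br />
As a rough sketch of how a module from each category might be used (ZLIB and the USE_FOO/USE_BAR options below are illustrative choices, not taken from the text above):<br />
<br />
<syntaxhighlight lang="text"><br />
# Find module: locate the zlib package<br />
find_package (ZLIB)<br />
if (ZLIB_FOUND)<br />
  include_directories (${ZLIB_INCLUDE_DIRS})<br />
endif ()<br />
<br />
# System introspection modules: test byte order and a type size<br />
include (TestBigEndian)<br />
test_big_endian (WORDS_BIGENDIAN)<br />
include (CheckTypeSize)<br />
check_type_size (long SIZEOF_LONG)<br />
<br />
# Utility module: an option that is only offered when another is ON<br />
include (CMakeDependentOption)<br />
cmake_dependent_option (USE_BAR "Enable bar" ON "USE_FOO" OFF)<br />
</syntaxhighlight><br />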
<br />
====Using CMake with SWIG====<br />
<br />
One example of how modules can be used is to look at wrapping your C/C++ code in another language using Simplified Wrapper and Interface Generator (SWIG; www.swig.org). SWIG is a tool that reads annotated C/C++ header files and creates wrapper code (glue code) to make the corresponding C/C++ libraries available to other programming languages such as Tcl, Python, or Java. CMake supports SWIG with the find_package() (page 297) command. Although it can be used from CMake with custom commands, the SWIG package provides several macros that make building SWIG projects with CMake simpler. To use the SWIG macros, you must first call the find_package command with the name SWIG. Then, include the file referenced by the variable SWIG_USE_FILE. This will define several macros and set up CMake to easily build SWIG-based projects.<br />
<br />
Two very useful macros are SWIG_ADD_MODULE and SWIG_LINK_LIBRARIES. SWIG_ADD_MODULE works much like the add_library() (page 274) command in CMake. The command is invoked like this:<br />
<br />
<syntaxhighlight lang="text"><br />
SWIG_ADD_MODULE (module_name language source1 source2 ... sourceN)<br />
</syntaxhighlight><br />
<br />
The first argument is the name of the module being created. The next argument is the target language SWIG is producing a wrapper for. The rest of the arguments consist of a list of source files used to create the shared module. The big difference is that SWIG .i interface files can be used directly as sources. The macro will create the correct custom commands to run SWIG, and generate the C or C++ wrapper code from the SWIG interface files. The sources can also be regular C or C++ files that need to be compiled in with the wrappers.<br />
<br />
The SWIG_LINK_LIBRARIES macro is used to link support libraries to the module. This macro is used because depending on the language being wrapped by SWIG, the name of the module may be different. The actual name of the module is stored in a variable called SWIG_MODULE_${name}_REAL_NAME where ${name} is the name passed into the SWIG_ADD_MODULE macro. For example, SWIG_ADD_MODULE(foo tcl foo.i) creates a variable called SWIG_MODULE_foo_REAL_NAME, which contains the name of the actual module created.<br />
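<br />
For instance (a hypothetical sketch; the TCL_LIBRARY variable comes from the FindTCL module), the real-name variable can be used to set properties on the target that the macro generates:<br />
<br />
<syntaxhighlight lang="text"><br />
SWIG_ADD_MODULE (foo tcl foo.i)<br />
SWIG_LINK_LIBRARIES (foo ${TCL_LIBRARY})<br />
# the target created for the module is addressed through its real name<br />
set_target_properties (${SWIG_MODULE_foo_REAL_NAME} PROPERTIES<br />
                       FOLDER "Wrappers")<br />
</syntaxhighlight><br />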
<br />
Now consider the following example, which is based on the example found in SWIG under Examples/python/class.<br />
<br />
<syntaxhighlight lang="text"><br />
# Find SWIG and include the use swig file<br />
find_package (SWIG REQUIRED)<br />
include (${SWIG_USE_FILE})<br />
<br />
# Find python library and add include path for python headers<br />
find_package (PythonLibs)<br />
include_directories (${PYTHON_INCLUDE_PATH})<br />
<br />
# set the global swig flags to empty<br />
set (CMAKE_SWIG_FLAGS "")<br />
<br />
# let swig know that example.i is c++ and add the -includeall<br />
# flag to swig<br />
set_source_files_properties (example.i PROPERTIES CPLUSPLUS ON)<br />
set_source_files_properties (example.i<br />
PROPERTIES SWIG_FLAGS "-includeall")<br />
<br />
# Create the swig module called example<br />
# using the example.i source and example.cxx<br />
# swig will be used to create wrap_example.cxx from example.i<br />
SWIG_ADD_MODULE (example python example.i example.cxx)<br />
SWIG_LINK_LIBRARIES (example ${PYTHON_LIBRARIES})<br />
</syntaxhighlight><br />
<br />
This example first uses find_package to locate SWIG, and includes the SWIG_USE_FILE defining the SWIG CMake macros. It then finds the Python libraries and sets up CMake to build with the Python library. Notice that the SWIG input file "example.i" is used like any other source file in CMake, and the properties set on the file tell SWIG that the file is C++ and that the SWIG flag -includeall should be used when running SWIG on that source file. The module is created by telling SWIG the name of the module, the target language, and the list of source files. Finally, the Python libraries are linked to the module.<br />
<br />
====Using CMake with Qt====<br />
<br />
Projects using the popular widget toolkit Qt from Nokia (qt.nokia.com) can be built with CMake. CMake supports multiple versions of Qt, including versions 3 and 4. The first step is to tell CMake which version(s) of Qt to look for. Many Qt applications are designed to work with Qt3 or Qt4, but not both. If your application is designed for Qt4, use the FindQt4 module; for Qt3, use the FindQt3 module. If your project can work with either version of Qt then use the generic FindQt module. All of the modules provide helpful tools for building Qt projects. The following is a simple example of building a project that uses Qt4.<br />
<br />
<syntaxhighlight lang="text"><br />
find_package (Qt4 REQUIRED)<br />
<br />
include (${QT_USE_FILE})<br />
<br />
# what are our ui files?<br />
set (QTUI_SRCS qtwrapping.ui)<br />
QT4_WRAP_UI (QTUI_H_SRCS ${QTUI_SRCS})<br />
QT4_WRAP_CPP (QT_MOC_SRCS TestMoc.h)<br />
<br />
add_library (myqtlib ${QTUI_H_SRCS} ${QT_MOC_SRCS})<br />
target_link_libraries (myqtlib ${QT_LIBRARIES})<br />
<br />
add_executable (qtwrapping qtwrappingmain.cxx)<br />
target_link_libraries (qtwrapping myqtlib)<br />
</syntaxhighlight><br />
<br />
In addition to explicitly listing Qt MOC sources, CMake also has a feature called automoc, which automatically scans all source files for moc constructs and runs moc accordingly. To change the above example to use automoc, simply turn the automoc property on for the library and remove the QT4_WRAP_CPP (QT_MOC_SRCS TestMoc.h) line.<br />
<br />
<syntaxhighlight lang="text"><br />
set_target_properties (myqtlib PROPERTIES AUTOMOC TRUE)<br />
</syntaxhighlight><br />
<br />
For more information about automoc, see the documentation in the variables section about variables with _AUTOMOC_ in them.<br />
<br />
<br />
====Using CMake with FLTK====<br />
<br />
CMake also supports the Fast Light Toolkit (FLTK) with special commands. The FLTK_WRAP_UI command is used to run the FLTK fluid program on a .fl file and produce a C++ source file as part of the build. The following example shows how to use FLTK with CMake.<br />
<br />
<syntaxhighlight lang="text"><br />
find_package (FLTK)<br />
if (FLTK_FOUND)<br />
set (FLTK_SRCS<br />
fltk1.fl<br />
)<br />
fltk_wrap_ui (wraplibFLTK ${FLTK_SRCS})<br />
add_library (wraplibFLTK ${wraplibFLTK_UI_SRCS})<br />
endif (FLTK_FOUND)<br />
</syntaxhighlight><br />
<br />
<br />
===Policies===<br />
<br />
Occasionally a new feature or change is made to CMake that is not fully backwards compatible with older versions. This can create problems when someone tries to use an old CMakeLists file with a new version of CMake. To help both end users and developers through such issues, we have introduced policies. Policies are a mechanism for helping improve backwards compatibility and tracking compatibility issues between different versions of CMake.<br />
<br />
====Design Goals====<br />
<br />
There were four main design goals for the CMake policy mechanism:<br />
<br />
1. Existing projects should build with newer versions of CMake than that used by the project authors.<br />
* Users should not need to edit code to get the projects to build.<br />
* Warnings may be issued but the projects should build.<br />
<br />
2. Correctness of new interfaces or bug fixes in old interfaces should not be inhibited by compatibility requirements. Any reduction in the correctness of the latest interface is not fair to new projects.<br />
<br />
3. Every change made to CMake that may require changes to a project's CMakeLists files should be documented.<br />
* Each change should also have a unique identifier that can be referenced with warning and error messages.<br />
* The new behavior is enabled only when the project has somehow indicated it is supported.<br />
<br />
4. We must be able to eventually remove code that implements compatibility with ancient CMake versions.<br />
* Such removal is necessary to keep the code clean and to allow for internal refactoring.<br />
* After such removal, attempts at building projects written for ancient versions must fail with an informative message.<br />
<br />
All policies in CMake are assigned a name in the form CMPNNNN where NNNN is an integer value. Policies typically support both an old behavior that preserves compatibility with earlier versions of CMake, and a new behavior that is considered correct and preferred for use by new projects. Every policy has documentation detailing the motivation for the change, and the old and new behaviors.<br />
<br />
====Setting Policies====<br />
<br />
Projects may configure the setting of each policy to request old or new behaviors. When CMake encounters user code that may be affected by a particular policy, it checks to see whether the project has set the policy. If the policy has been set (to OLD or NEW) then CMake follows the behavior specified. If the policy has not been set then the old behavior is used, but a warning is issued telling the project author to set the policy.<br />
<br />
There are a couple ways to set the behavior of a policy. The quickest way is to set all policies to a version that corresponds to the release version of CMake the project was written in. Setting the policy version requests the new behavior for all policies introduced in the corresponding version of CMake or earlier. Policies introduced in later versions are marked as "not set" in order to produce proper warning messages. The policy version is set using the cmake_policy() (page 280) command's VERSION signature. For example, the code<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (VERSION 2.6)<br />
</syntaxhighlight><br />
<br />
will request the new behavior for all policies introduced in CMake 2.6 or earlier. The cmake_minimum_required() (page 280) command will also set the policy version, which is convenient for use at the top of projects. A project should typically begin with the lines<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6)<br />
project (MyProject)<br />
# ...code using CMake 2.6 policies<br />
</syntaxhighlight><br />
<br />
Of course, "2.6" should be replaced with the version of CMake the project is written for. You can also set each policy individually if you wish; this is sometimes helpful for project authors who want to incrementally convert their projects to use a new behavior, or silence warnings about dependence on an old behavior. The cmake_policy command's SET option may be used to explicitly request old or new behavior for a particular policy.<br />
<br />
For example, CMake 2.6 introduced the policy CMP0002 (page 538), which requires all logical target names to be globally unique (duplicate target names previously worked by accident in some cases, but were not diagnosed). Projects using duplicate target names and working accidentally will receive warnings referencing the policy. The warnings may be silenced with the code<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (SET CMP0002 OLD)<br />
</syntaxhighlight><br />
<br />
which explicitly tells CMake to use the old behavior for the policy (silently accepting duplicate target names). Another option is to use the code<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (SET CMP0002 NEW)<br />
</syntaxhighlight><br />
<br />
to explicitly tell CMake to use new behavior and produce an error when a duplicate target is created. Once this is added to the project, it will not build until the author removes any duplicate target names.<br />
<br />
When a new version of CMake is released, it may introduce new policies; old projects will still build because, by default, they do not request NEW behavior for any of the new policies. When starting a new project, one should always specify the most recent release of CMake to be supported as the policy version level. This will ensure that the project is written to work using policies from that version of CMake and not using any old behavior. If no policy version is set, CMake will warn and assume a policy version of 2.4. This allows existing projects that do not specify cmake_minimum_required to build as they would have with CMake 2.4.<br />
<br />
====The Policy Stack====<br />
<br />
Policy settings are scoped using a stack. A new level of the stack is pushed when entering a new subdirectory of the project (with add_subdirectory() (page 277)) and popped when leaving it. Therefore, setting a policy in one directory of a project will not affect parent or sibling directories, but it will affect subdirectories.<br />
<br />
This is useful when a project contains subprojects that are maintained separately yet built inside the tree. The top-level CMakeLists file in a project may write<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (VERSION 2.6)<br />
project (MyProject)<br />
add_subdirectory (OtherProject)<br />
# ... code requiring new behavior as of CMake 2.6 ...<br />
</syntaxhighlight><br />
<br />
while the OtherProject/CMakeLists.txt file contains<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (VERSION 2.4)<br />
project (OtherProject)<br />
# ... code that builds with CMake 2.4 ...<br />
</syntaxhighlight><br />
<br />
This allows a project to be updated to CMake 2.6 while subprojects, modules, and included files continue to build with CMake 2.4 until their maintainers update them.<br />
<br />
User code may use the cmake_policy command to push and pop its own stack levels as long as every push is paired with a pop. This is useful when temporarily requesting different behavior for a small section of code. For example, policy CMP0003 (page 539) removes extra link directories that used to be included when new behavior is used. When incrementally updating a project, it may be difficult to build a particular target with the new behavior while the remaining targets are fine. The code<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (PUSH)<br />
cmake_policy (SET CMP0003 OLD) # use old-style link for now<br />
add_executable (myexe ...)<br />
cmake_policy (POP)<br />
</syntaxhighlight><br />
<br />
will silence the warning and use the old behavior for that target. You can get a list of policies and help on specific policies by running CMake from the command line as follows<br />
<br />
<syntaxhighlight lang="text"><br />
cmake --help-command cmake_policy<br />
cmake --help-policies<br />
cmake --help-policy CMP0003<br />
</syntaxhighlight><br />
<br />
<br />
====Updating a Project For a New Version of CMake====<br />
<br />
When a CMake release introduces new policies, it may generate warnings for some existing projects. These warnings indicate that changes to a project may be necessary for dealing with the new policies. While old releases of a project can continue to build with the warnings, the project development tree should be updated to take the new policies into account. There are two approaches to updating a tree: one-shot and incremental. The question of which one is easier depends on the size of the project and which new policies produce warnings.<br />
<br />
=====The One-Shot Approach=====<br />
<br />
The simplest approach to updating a project for a new version of CMake is simply to change the policy version which is set at the top of the project. Then, try building with the new CMake version to fix problems.<br />
<br />
For example, to update a project to build with CMake 2.8, one might write<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.8)<br />
</syntaxhighlight><br />
<br />
at the beginning of the top-level CMakeLists file. This tells CMake to use the new behavior for every policy introduced in CMake 2.8 and below. When building this project with CMake 2.8, no warnings will be produced regarding policies because it knows that no policies were introduced in later versions. However, if the project was depending on the old policy behavior, it may not build since CMake is now using the new behavior without warning. It is up to the project author who added the policy version line to fix these issues.<br />
<br />
=====The Incremental Approach=====<br />
<br />
Another approach to updating a project for a new version of CMake is to deal with each warning one-by-one. One advantage of this approach is that the project will continue to build throughout the process, so the changes can be made incrementally.<br />
<br />
When CMake encounters a situation where it needs to know whether to use the old or new behavior for a policy, it checks whether the project has set the policy. If the policy is set, CMake silently uses the corresponding behavior. If the policy is not set, CMake uses the old behavior but warns the author that the policy is not set.<br />
<br />
In many cases, a warning message will point to the exact line of code in the CMakeLists files that caused the warning. In some cases, the situation cannot be diagnosed until CMake is generating the native build system rules for the project, so the warning will not include explicit context information. In these cases, CMake will try to provide some information about where code may need to be changed. The documentation for these "generation-time" policies should indicate the point in the project code where the policy should be set to take effect.<br />
<br />
In order to incrementally update a project, one warning should be addressed at a time. Several cases may occur, as described below.<br />
<br />
=====Silence a Warning When the Code is Correct=====<br />
<br />
Many policy warnings may be produced simply because the project has not set the policy even though the project may work correctly with the new behavior (there is no way for CMake to know the difference). For a warning about some policy, CMP<NNNN>, you can check whether this is the case by adding<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (SET CMP<NNNN> NEW)<br />
</syntaxhighlight><br />
<br />
to the top of the project and trying to build it. If the project builds correctly with the new behavior, move on to the next policy warning. If the project does not build correctly, one of the other cases may apply.<br />
<br />
=====Silence a Warning Without Updating the Code=====<br />
<br />
Users can suppress all instances of a warning CMP<NNNN> by adding<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (SET CMP<NNNN> OLD)<br />
</syntaxhighlight><br />
<br />
to the top of a project. However, we encourage project authors to update their code to work with the new behavior for all policies. This is especially important because versions of CMake in the (distant) future may remove support for old behaviors and produce an error for projects requesting them (which tells the user to get an older version of CMake to build the project).<br />
<br />
=====Silence a Warning by Updating Code=====<br />
<br />
When a project does not work correctly with the NEW behavior for a policy, the code needs to be updated. In order to deal with a warning for some policy CMP<NNNN>, add<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (SET CMP<NNNN> NEW)<br />
</syntaxhighlight><br />
<br />
to the top of the project and then fix the code to work with the NEW behavior.<br />
<br />
If many instances of the warning occur, fixing all of them simultaneously may be too difficult; instead, a developer may fix them one at a time by using the PUSH/POP signatures of the cmake_policy command:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_policy (PUSH)<br />
cmake_policy (SET CMP<NNNN> NEW)<br />
# ... code updated for new policy behavior ...<br />
cmake_policy (POP)<br />
</syntaxhighlight><br />
<br />
This will request the new behavior for a small region of code that has been fixed. Other instances of the policy warning may still appear and must be fixed separately.<br />
<br />
<br />
=====Updating the Project Policy Version=====<br />
<br />
After addressing all policy warnings and getting the project to build cleanly with the new CMake version, one step remains. The policy version set at the top of the project should now be updated to match the new CMake version, just as in the one-shot approach described above. For example, after updating a project to build cleanly with CMake 2.8, users may update the top of the project with the line<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required(VERSION 2.8)<br />
</syntaxhighlight><br />
<br />
This will set all policies introduced in CMake 2.8 or below to use the new behavior. Then users may sweep through the rest of the code and remove the calls that use the cmake_policy command to request the new behavior incrementally. The end result should look the same as the one-shot approach, but could be attained step-by-step.<br />
<br />
<br />
=====Supporting Multiple CMake Versions=====<br />
<br />
Some projects might want to support a few releases of CMake simultaneously. The goal is to build with an older version, while also working with newer versions without warnings. In order to support both CMake 2.4 and 2.6, one may write code like<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.4)<br />
if (COMMAND cmake_policy)<br />
# policy settings ...<br />
cmake_policy (SET CMP0003 NEW)<br />
endif (COMMAND cmake_policy)<br />
</syntaxhighlight><br />
<br />
This will set the policies to build with CMake 2.6 and to ignore them for CMake 2.4. In order to support both CMake 2.6 and some policies of CMake 2.8, one may write code like:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6)<br />
if (POLICY CMP1234)<br />
# policies not known to CMake 2.6...<br />
cmake_policy (SET CMP1234 NEW)<br />
endif (POLICY CMP1234)<br />
</syntaxhighlight><br />
<br />
This will set the policies to build with CMake 2.8 and to ignore them for CMake 2.6. If it is known that the project builds with both CMake 2.6 and CMake 2.8's new policies, users may write:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6)<br />
if (NOT ${CMAKE_VERSION} VERSION_LESS 2.8)<br />
cmake_policy (VERSION 2.8)<br />
endif ()<br />
</syntaxhighlight><br />
<br />
<br />
=====Linking Libraries=====<br />
<br />
In CMake 2.6 and later, a new approach to generating link lines for targets has been implemented. Consider these libraries:<br />
<br />
<syntaxhighlight lang="text"><br />
/path/to/libfoo.a<br />
/path/to/libfoo.so<br />
</syntaxhighlight><br />
<br />
Previously, if someone wrote<br />
<br />
<syntaxhighlight lang="text"><br />
target_link_libraries (myexe /path/to/libfoo.a)<br />
</syntaxhighlight><br />
<br />
CMake would generate this code to link it:<br />
<br />
<syntaxhighlight lang="text"><br />
... -L/path/to -Wl,-Bstatic -lfoo -Wl,-Bdynamic ...<br />
</syntaxhighlight><br />
<br />
This worked most of the time, but some platforms (such as Mac OS X) do not support the -Bstatic or equivalent flag. This made it impossible to link to the static version of a library without creating a symlink in another directory and using that one instead. Now CMake will generate this code:<br />
<br />
<syntaxhighlight lang="text"><br />
... /path/to/libfoo.a ...<br />
</syntaxhighlight><br />
<br />
This guarantees that the correct library is chosen. However, there are some caveats to keep in mind. In the past, a project could write this (incorrect) code and it would work by accident<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (myexe myexe.c)<br />
target_link_libraries (myexe /path/to/libA.so B)<br />
</syntaxhighlight><br />
<br />
Here B is meant to link /path/to/libB.so. This code is incorrect because it asks CMake to link to B, but does not provide the proper linker search path for it. It used to work by accident because the -L/path/to would get added as part of the implementation of linking to A. The correct code would be either<br />
<br />
<syntaxhighlight lang="text"><br />
link_directories (/path/to)<br />
add_executable (myexe myexe.c)<br />
target_link_libraries (myexe /path/to/libA.so B)<br />
</syntaxhighlight><br />
<br />
or even better<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (myexe myexe.c)<br />
target_link_libraries (myexe /path/to/libA.so /path/to/libB.so)<br />
</syntaxhighlight><br />
<br />
<br />
=====Linking to System Libraries=====<br />
<br />
System libraries on UNIX-like systems are typically provided in /usr/lib or /lib. These directories are considered implicit linker search paths because linkers automatically search these locations, even without a flag like -L/usr/lib. Consider the code<br />
<br />
<syntaxhighlight lang="text"><br />
find_library (M_LIB m)<br />
target_link_libraries (myexe ${M_LIB})<br />
</syntaxhighlight><br />
<br />
Typically the find_library command would find the math library /usr/lib/libm.so, but some platforms provide multiple versions of libraries corresponding to different architectures. For example, on an IRIX machine one might find the libraries<br />
<br />
<syntaxhighlight lang="text"><br />
/usr/lib/libm.so (ELF o32)<br />
/usr/lib32/libm.so (ELF n32)<br />
/usr/lib64/libm.so (ELF 64)<br />
</syntaxhighlight><br />
<br />
On a Solaris machine one might find:<br />
<br />
<syntaxhighlight lang="text"><br />
/usr/lib/libm.so (sparcv8 architecture)<br />
/usr/lib/sparcv9/libm.so (sparcv9 architecture)<br />
</syntaxhighlight><br />
<br />
Unfortunately, find_library may not know about all of the architecture-specific system search paths used by the linker. In fact, when it finds /usr/lib/libm.so, it may be finding a library with the incorrect architecture. If the link computation were to produce the line<br />
<br />
<syntaxhighlight lang="text"><br />
... /usr/lib/libm.so ...<br />
</syntaxhighlight><br />
<br />
the linker might complain if /usr/lib/libm.so does not match the architecture it wants. One solution to this problem is to have the link computation recognize that the library is in a system directory and ask the linker to search for the library. It could produce the link line<br />
<br />
<syntaxhighlight lang="text"><br />
... -lm ...<br />
</syntaxhighlight><br />
<br />
and the linker would search through its architecture-specific implicit link directories to find the correct library. Unfortunately, this solution suffers from the original problem of distinguishing between static and shared versions. In order to ask the linker to find a static system library with the correct architecture, it must produce the link line<br />
<br />
<syntaxhighlight lang="text"><br />
... -Wl,-Bstatic -lm ... -Wl,-Bshared ...<br />
</syntaxhighlight><br />
<br />
Since not all platforms support such flags, CMake compromises. Libraries that are not in implicit system locations are linked by passing the full library path to the linker. Libraries that are in implicit system locations (such as /usr/lib) are linked by passing the -l option if a flag like -Bstatic is available, and by passing the full library path to the linker otherwise.<br />
<br />
<br />
====Specifying Optimized or Debug Libraries with a Target====<br />
<br />
On Windows platforms, users are often required to link debug libraries with debug libraries, and optimized libraries with optimized libraries. CMake helps satisfy this requirement with the target_link_libraries() (page 340) command, which accepts an optional flag labeled as debug or optimized. If a library is preceded with either debug or optimized, then that library will only be linked in with the appropriate configuration type. For example<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (foo foo.c)<br />
target_link_libraries (foo debug libdebug optimized libopt)<br />
</syntaxhighlight><br />
<br />
In this case, foo will be linked against libdebug if a debug build was selected, or against libopt if an optimized build was selected.<br />
<br />
<br />
===Advanced Linking===<br />
<br />
In CMake, library dependencies are transitive by default, regardless of whether the libraries are static or shared. When a target links to another target, it inherits all of the libraries linked to that target, and those libraries appear on its link line as well.<br />
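<br />
For example (a minimal sketch with hypothetical target names), the dependency of B on A is carried over to anything that links B:<br />
<br />
<syntaxhighlight lang="text"><br />
add_library (A a.cxx)<br />
add_library (B b.cxx)<br />
target_link_libraries (B A)<br />
<br />
add_executable (myexe main.cxx)<br />
# A is inherited transitively; it appears on myexe's link line as well<br />
target_link_libraries (myexe B)<br />
</syntaxhighlight><br />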
<br />
This behavior can be changed by setting a target's LINK_INTERFACE_LIBRARIES (page 598) property. If set, only the targets listed in LINK_INTERFACE_LIBRARIES will be used as the set of transitive link dependencies for the target. CMake provides two convenient ways to set the LINK_INTERFACE_LIBRARIES property:<br />
<br />
<syntaxhighlight lang="text"><br />
target_link_libraries(<target> LINK_INTERFACE_LIBRARIES<br />
[[debug|optimized|general] <lib>] ...)<br />
</syntaxhighlight><br />
<br />
The LINK_INTERFACE_LIBRARIES mode appends the libraries to the LINK_INTERFACE_LIBRARIES target property and its per-configuration equivalents instead of using them for linking. Libraries specified as "debug" are appended to the LINK_INTERFACE_LIBRARIES_DEBUG property (or to the properties corresponding to configurations listed in the DEBUG_CONFIGURATIONS (page 563) global property if it is set). Libraries specified as "optimized" are appended to the LINK_INTERFACE_LIBRARIES property. Libraries specified as "general" (or without any keyword) are treated as if specified for both "debug" and "optimized".<br />
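<br />
As a sketch (target and library names are hypothetical), a library can link to several libraries while restricting its transitive interface to just one of them:<br />
<br />
<syntaxhighlight lang="text"><br />
add_library (mylib SHARED mylib.cxx)<br />
# linked into mylib<br />
target_link_libraries (mylib internal_lib api_lib)<br />
# only api_lib is propagated to targets that link mylib<br />
target_link_libraries (mylib LINK_INTERFACE_LIBRARIES api_lib)<br />
</syntaxhighlight><br />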
<br />
<syntaxhighlight lang="text"><br />
target_link_libraries (<target><br />
<LINK_PRIVATE|LINK_PUBLIC><br />
[[debug|optimized|general] <lib>] ...<br />
[<LINK_PRIVATE|LINK_PUBLIC><br />
[[debug|optimized|general] <lib>] ...])<br />
</syntaxhighlight><br />
<br />
The LINK_PUBLIC and LINK_PRIVATE modes can be used to specify both the link dependencies and the link interface in one command. Libraries and targets following LINK_PUBLIC are linked to, and are made part of the LINK_INTERFACE_LIBRARIES. Libraries and targets following LINK_PRIVATE are linked to, but are not made part of the LINK_INTERFACE_LIBRARIES. Using LINK_PUBLIC and LINK_PRIVATE causes all other libraries (before and after) linked to a target to be private unless they are explicitly stated to be LINK_PUBLIC.<br />
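<br />
For example (hypothetical names), a library might expose only its API library transitively while keeping an internal dependency private:<br />
<br />
<syntaxhighlight lang="text"><br />
add_library (mylib SHARED mylib.cxx)<br />
target_link_libraries (mylib<br />
  LINK_PUBLIC  api_lib       # linked, and part of the link interface<br />
  LINK_PRIVATE internal_lib) # linked, but not propagated<br />
</syntaxhighlight><br />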
<br />
CMake will also propagate "usage requirements" from linked library targets. Usage requirements affect compilation of sources in the <target>. They are specified by properties defined on linked targets. During generation of the build system, CMake integrates usage requirement property values with the corresponding build properties for <target>:<br />
<br />
INTERFACE_COMPILE_DEFINITIONS (page 592): Appends to COMPILE_DEFINITIONS (page 616)<br />
<br />
INTERFACE_INCLUDE_DIRECTORIES (page 593): Appends to INCLUDE_DIRECTORIES (page 570)<br />
<br />
INTERFACE_POSITION_INDEPENDENT_CODE (page 594): Sets POSITION_INDEPENDENT_CODE, or is checked for consistency with an existing value<br />
<br />
For example, to specify include directories that are required when linking to a library, you can do the following<br />
<br />
<syntaxhighlight lang="text"><br />
add_library (foo foo.cxx)<br />
set_property (TARGET foo APPEND PROPERTY<br />
INTERFACE_INCLUDE_DIRECTORIES "${CMAKE_CURRENT_BINARY_DIR}"<br />
"${CMAKE_CURRENT_SOURCE_DIR}")<br />
</syntaxhighlight><br />
<br />
Now anything that links to the target foo will automatically have foo's binary and source as include directories. The order of the include directories brought in through "usage requirements" will match the order of the targets in the target_link_libraries call.<br />
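<br />
A consumer of foo then picks up those directories simply by linking to it (a sketch continuing the example above, with a hypothetical executable name):<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (consumer main.cxx)<br />
# foo's INTERFACE_INCLUDE_DIRECTORIES are applied to consumer automatically<br />
target_link_libraries (consumer foo)<br />
</syntaxhighlight><br />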
<br />
<br />
===Object Libraries===<br />
<br />
Before version 2.8.8, CMake had no way to encapsulate numerous libraries into one combined library; you had to compile each individual library as well as the combined library. This was acceptable as long as compilation time was low and each library used the same preprocessor definitions, include directories, and flags.<br />
<br />
However, large projects typically organize their source files into groups, often in separate subdirectories, that each need different include directories and preprocessor definitions. For this use case CMake has developed the concept of Object Libraries. An Object Library is a collection of source files compiled into an object file which is not linked into a library file or made into an archive.<br />
<br />
Instead other targets created by add_library or add_executable may reference the objects using an expression of the form $<TARGET_OBJECTS:name> as a source, where "name" is the target created by the add_library() (page 274) call. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
add_library(A OBJECT a.cpp)<br />
add_library(B OBJECT b.cpp)<br />
add_library(Combined $<TARGET_OBJECTS:A> $<TARGET_OBJECTS:B> )<br />
</syntaxhighlight><br />
<br />
will include A and B object files in a library called Combined. Object libraries may contain only sources (and headers) that compile to object files.<br />
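<br />
Object files from an object library may also be consumed directly by an executable; a minimal sketch, assuming hypothetical file names (valid from CMake 2.8.8 on):<br />
<br />
<syntaxhighlight lang="text"><br />
# reuse the objects from A in an executable,<br />
# without an intermediate archive or library file<br />
add_executable (myExe main.cpp $<TARGET_OBJECTS:A>)<br />
</syntaxhighlight><br />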
<br />
===Shared Libraries and Loadable Modules===<br />
<br />
Shared libraries and loadable modules are very powerful tools for software developers. They can be used to create extension modules or plugins for off-the-shelf software, and can be used to decrease the compile/link/run cycles for C and C++ programs. However, despite years of use, the cross-platform creation of shared libraries and modules remains a dark art understood by only a few developers. CMake has the ability to aid developers in the creation of shared libraries and modules. CMake knows the correct tools and flags to use in order to produce shared libraries for most modern operating systems that support them. Unfortunately, CMake cannot do all the work, and developers must sometimes alter source code and understand the basic concepts and common pitfalls associated with shared libraries before they can be used effectively. This section will describe many of the considerations required for taking advantage of shared libraries and loadable modules.<br />
<br />
A shared library should be thought of more like an executable than a static library; on most systems they actually require executable permissions to be set on the shared library file. This means that shared libraries can link to other shared libraries when they are created in the same way as an executable. Unlike a static library where the atomic unit is the object file, for shared libraries, the entire library is the atomic unit. This can cause some unexpected linker errors when converting from static to shared libraries. If an object file is part of a static library but the executable linking to the library does not use any of the symbols in that object file, then the file is simply excluded from the final linked executable. With shared libraries, all the object files that make up the library and all of the dependencies that they require come as one unit. For example, suppose you had a library with an object file defining the function DisplayOnXWindow(), which required the X11 library. If you linked an executable to that library, but did not call the DisplayOnXWindow() function, the static library version would not require X11; but the shared library version would require the X11 library. This is because a shared library has to be taken as one unit, and a static library is only an archive of object files from which linkers can choose the objects needed. This means that static linked executables can be smaller, as they only contain the object code actually used.<br />
<br />
Another difference between shared and static libraries is library order. With static libraries, the order on the link line can make a difference; this is because most linkers use only the symbols that are needed in a single pass over all the given libraries. So, the library order should go from the library that uses the most other libraries to the library that does not use any other libraries. CMake will preserve and remember the order of libraries and library dependencies in a project. This means that each library in a project should use the target_link_libraries() (page 340) command to specify all of the libraries that it directly depends on. The libraries will be linked with each other for shared builds, but not static builds; however, the link information is used in static builds when executables are linked. An executable that links only to libA will get libA plus libB and libC, as long as libA's dependency on libB and libC was properly specified using target_link_libraries(libA libB libC).<br />
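<br />
The dependency chain described above might be declared as follows (a sketch using the libA/libB/libC names from the text; the executable and its source file are hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# libA directly depends on libB and libC<br />
target_link_libraries (libA libB libC)<br />
<br />
# the executable lists only its direct dependency;<br />
# libB and libC are brought in through libA's recorded link interface<br />
add_executable (myExe main.cxx)<br />
target_link_libraries (myExe libA)<br />
</syntaxhighlight><br />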
<br />
At this point, one might wonder why shared libraries would be preferred over static libraries. There are several reasons. First, shared libraries can decrease the compile/link/run cycle time because the linker does not have to do as much work; there are fewer decisions to be made about which object files to keep. Oftentimes, the executable does not even need to be re-linked after the shared library is rebuilt; therefore developers can work on a library by compiling and linking only the small part of the program that is currently being developed, and then re-running the executable after each build of the shared library. Also, if a library is used by many different executables on a system, then there only needs to be one copy of the library on disk, and often in memory too.<br />
<br />
In addition to the concept of a software library, shared libraries can also be used on many systems as run time loadable modules. This means that at run time, a program can load and execute object code that was not part of the original software. This allows developers to create software that is both open and closed. (For more information, see Object-Oriented Software Construction by Bertrand Meyer.) Closed software is that which cannot be modified. It has been through a testing cycle and can be certified to perform specific tasks with regression tests. However, a seemingly opposite goal is sought by developers of object-oriented software: open software can be extended by future developers. This can be done via inheritance and polymorphism with object systems. Shared libraries that can be loaded at run time allow these seemingly opposing goals to be achieved in the same software package. Many common applications support the idea of plugins; the most common of these applications is the web browser. Internet Explorer uses plugins to support video over the web and 3D visualization. In addition to plugins, loadable factories can be used to replace C++ objects at run time, as is done in the Visualization Toolkit (VTK).<br />
<br />
Once it is decided that shared libraries or loadable modules are the right choice for a particular project, there are a few issues that developers need to be aware of. The first question that must be answered is "which symbols are exported by the shared library?" This may sound like a simple question, but the answer is different for each platform. On many but not all UNIX systems, the default behavior is to export all the symbols, much like a static library. However, on Windows systems, developers must explicitly tell the linker and compiler which symbols are to be exported and imported from shared libraries. This is often a big problem for UNIX developers moving to Windows. There are two ways to tell the compiler/linker which symbols to export/import on Windows. The most common approach is to decorate the code with a Microsoft C/C++ language extension. An alternative is to create an extra file called a .def file, which is a simple ASCII file containing the names of all the symbols to be exported from a library.<br />
<br />
The Microsoft extension uses the __declspec directive. If a symbol has __declspec(dllexport) in front of it, it will be exported; if it has __declspec(dllimport), it will be imported. Since the same header file may be shared during the creation and use of a library, the same symbol must be exported in one case and imported in the other. This can only be done with the preprocessor. The developer can create a macro called LIBRARY_EXPORT, which is defined to dllexport when building the library and to dllimport when using the library. CMake helps this process by automatically defining ${LIBNAME}_EXPORTS when building a DLL (dynamic link library, a.k.a. a shared library) on Windows.<br />
<br />
The following code snippet is from the VTK library, vtkCommon, and is included by all files in the vtkCommon library:<br />
<br />
<syntaxhighlight lang="cpp"><br />
#if defined(WIN32)<br />
# if defined(vtkCommon_EXPORTS)<br />
#  define VTK_COMMON_EXPORT __declspec( dllexport )<br />
# else<br />
#  define VTK_COMMON_EXPORT __declspec( dllimport )<br />
# endif<br />
#else<br />
# define VTK_COMMON_EXPORT<br />
#endif<br />
</syntaxhighlight><br />
<br />
The example checks for Windows and for the vtkCommon_EXPORTS macro provided by CMake. So, on UNIX, VTK_COMMON_EXPORT is defined to nothing; on Windows during the building of vtkCommon.dll, it is defined as __declspec(dllexport); and when the file is being used by another file, it is defined as __declspec(dllimport).<br />
<br />
More recently, Linux and other Unix systems have added linker options that allow symbols to be explicitly exported in a similar manner as Windows. CMake has a module that will allow you to use explicit symbol exports on all systems that support them. The module is GenerateExportHeader.cmake, and contains the function generate_export_header. The function will modify the CXX and C flags to turn on explicit symbol exports for the system. It will also generate a header file much like the handwritten one above, which works for Windows only. For more information, see generate_export_header in the appendix.<br />
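<br />
A minimal sketch of using that module (the target name mylib and its source file are hypothetical; by default the function generates a header named after the target, here mylib_export.h, defining a MYLIB_EXPORT macro):<br />
<br />
<syntaxhighlight lang="text"><br />
include (GenerateExportHeader)<br />
add_library (mylib SHARED mylib.cxx)<br />
# generates mylib_export.h in the current binary directory;<br />
# mylib's headers can then decorate symbols with MYLIB_EXPORT<br />
generate_export_header (mylib)<br />
include_directories (${CMAKE_CURRENT_BINARY_DIR})<br />
</syntaxhighlight><br />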
<br />
The second approach on Windows requires a .def file to specify the symbols to be exported. This file could be created by hand, but for a large and changing C++ library, that could be time consuming and error-prone. CMake's custom commands can be used to run a pre-link program which will create a .def file from the compiled object files automatically. In the following example, a Perl script called makedef.pl is used; the script runs the DUMPBIN program on the .obj files, extracts all of the exportable symbols, and writes a .def file with the correct exports for all the symbols in the library mylib.<br />
<br />
<syntaxhighlight lang="text"><br />
----CMakeLists.txt----<br />
<br />
cmake_minimum_required (VERSION 2.6)<br />
project (myexe)<br />
<br />
set (SOURCES mylib.cxx mylib2.cxx)<br />
<br />
# create a list of all the object files<br />
string (REGEX REPLACE "\\.cxx" ".obj" OBJECTS "${SOURCES}")<br />
<br />
# create a shared library with the .def file<br />
add_library (mylib SHARED ${SOURCES}<br />
  ${CMAKE_CURRENT_BINARY_DIR}/mylib.def<br />
)<br />
<br />
# set the .def file as generated<br />
set_source_files_properties (<br />
  ${CMAKE_CURRENT_BINARY_DIR}/mylib.def<br />
  PROPERTIES GENERATED 1<br />
)<br />
<br />
# create an executable<br />
add_executable (myexe myexe.cxx)<br />
<br />
# link the executable to the dll<br />
target_link_libraries (myexe mylib)<br />
<br />
# convert to windows slashes<br />
set (OUTDIR<br />
  ${CMAKE_CURRENT_BINARY_DIR}/${CMAKE_CFG_INTDIR}<br />
)<br />
string (REGEX REPLACE "/" "\\\\" OUTDIR ${OUTDIR})<br />
<br />
# create a custom pre-link command that runs<br />
# a perl script to create a .def file using dumpbin<br />
add_custom_command (<br />
  TARGET mylib PRE_LINK<br />
  COMMAND perl<br />
  ARGS ${CMAKE_CURRENT_SOURCE_DIR}/makedef.pl<br />
       ${CMAKE_CURRENT_BINARY_DIR}\\mylib.def mylib<br />
       ${OUTDIR} ${OBJECTS}<br />
  COMMENT "Create .def file"<br />
)<br />
<br />
----myexe.cxx----<br />
#include <iostream><br />
#include "mylib.h"<br />
int main()<br />
{<br />
  std::cout << myTen() << "\n";<br />
  std::cout << myEight() << "\n";<br />
}<br />
<br />
----mylib.cxx----<br />
int myTen()<br />
{<br />
  return 10;<br />
}<br />
<br />
----mylib2.cxx----<br />
int myEight()<br />
{<br />
  return 8;<br />
}<br />
</syntaxhighlight><br />
<br />
There is a significant difference between Windows and the default linker options on UNIX systems with respect to the requirements of symbols. DLLs on Windows are required to be fully resolved, meaning that they must link every symbol at creation. UNIX systems allow shared libraries to get symbols from the executable or other shared libraries at run time. On UNIX systems that support this feature, CMake will compile with the flags that allow executable symbols to be used by shared libraries. This small difference can cause large problems. A common but hard-to-track problem with DLLs occurs with C++ template classes and static members. In these instances, two DLLs can end up with separate copies of what is supposed to be a single, global static member of a class. There are also problems with this approach on most UNIX systems; the start-up time for large applications with many symbols can be long, since much of the linking is deferred to run time.<br />
<br />
Another common pitfall occurs with C++ global objects. These objects require constructors to be called before they can be used. The main program that links or loads C++ shared libraries MUST be linked with the C++ compiler, or globals like cout may not be initialized before they are used, causing strange crashes at start-up time.<br />
<br />
Since executables that link to shared libraries must be able to find the libraries at run time, special environment variables and linker flags must be used. There are tools that can be used to show which libraries an executable is actually using. On many UNIX systems there is a tool called ldd (otool -L on Mac OS X), which shows which libraries are used by an executable. On Windows, a program called depends can be used to find the same type of information. On many UNIX systems, there are also environment variables like LD_LIBRARY_PATH that tell the program where to find the libraries at run time. Where supported, CMake will add run time library path information to the linked executables, so that LD_LIBRARY_PATH is not required. This feature can be turned off by setting the cache entry CMAKE_SKIP_RPATH (page 633) to true; this may be desirable for installed software that should not look in the build tree for shared libraries. On Windows, there is only one PATH environment variable, which is used both for locating DLLs and for finding executables.<br />
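<br />
For example, the run time path embedding could be skipped at configure time; a sketch (the cache description string is illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
# skip embedding run time library search paths in built binaries<br />
set (CMAKE_SKIP_RPATH TRUE CACHE BOOL "Do not embed RPATH in binaries")<br />
</syntaxhighlight><br />
<br />
Equivalently, the entry can be set on the command line with cmake -DCMAKE_SKIP_RPATH=ON.<br />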
<br />
<br />
===Shared Library Versioning===<br />
<br />
When an executable is linked to a shared library, it is important that the copy of the shared library loaded at run time matches that expected by the executable. On some UNIX systems, a shared library has an associated "soname" intended to solve this problem. When an executable links against the library, its soname is copied into the executable. At run time, the dynamic linker uses this name from the executable to search for the library.<br />
<br />
Consider a hypothetical shared library "foo" providing a few C functions that implement some functionality. The interface to foo is called an Application Programming Interface (API). If the implementation of these C functions change in a new version of foo, but the API remains the same, then executables linked against foo will still run correctly. When the API changes, old executables will no longer run with a new copy of foo; a new API version number must be associated with foo.<br />
<br />
This can be implemented by creating the original version of foo with a soname and file name such as libfoo.so.1. A symbolic link such as libfoo.so -> libfoo.so.1 will allow standard linkers to work with the library and create executables. The new version of foo can be called libfoo.so.2 and the symbolic link updated so that new executables use the new library. When an old executable runs, the dynamic linker will look for libfoo.so.1, find the old copy of the library, and run correctly. When a new executable runs, the dynamic linker will look for libfoo.so.2 and correctly load the new version.<br />
<br />
This scheme can be expanded to handle the case of changes to foo that do not modify the API. We introduce a second set of version numbers, totally independent of the first, which corresponds to the software version providing foo. For example, a larger project may have introduced the library foo starting in version 3.4. In this case, the file name for foo might be libfoo.so.3.4, but the soname would still be libfoo.so.1 because the API for foo is still on its first version. A symbolic link from libfoo.so.1 -> libfoo.so.3.4 will allow executables linked against the library to run. When a bug is fixed in the software without changing the API to foo, the new library file name might be libfoo.so.3.5, and the symbolic link can be updated to allow existing executables to run.<br />
<br />
CMake supports this soname-based version number encoding on platforms supporting soname natively. A target property for the shared library named VERSION (page 609) specifies the version number used to create the file name for the library. This version should correspond to that of the software package providing foo. On Windows, the VERSION property is used to set the binary image number using major.minor format. Another target property named SOVERSION (page 608) specifies the version number used to create the soname for the library. This version should correspond to the API version number for foo. These target properties are ignored on platforms where CMake does not support this scheme.<br />
<br />
The following CMake code configures the version numbers of the shared library foo:<br />
<br />
<syntaxhighlight lang="text"><br />
set_target_properties (foo PROPERTIES VERSION 1.2 SOVERSION 4)<br />
</syntaxhighlight><br />
<br />
This results in the following library and symbolic links:<br />
<br />
<syntaxhighlight lang="text"><br />
libfoo.so.1.2<br />
libfoo.so.4 -> libfoo.so.1.2<br />
libfoo.so -> libfoo.so.4<br />
</syntaxhighlight><br />
<br />
If only one of the two properties is specified, the other defaults to its value automatically. For example, the code<br />
<br />
<syntaxhighlight lang="text"><br />
set_target_properties (foo PROPERTIES VERSION 1.2)<br />
</syntaxhighlight><br />
<br />
results in the following shared library and symbolic link:<br />
<br />
<syntaxhighlight lang="text"><br />
libfoo.so.1.2<br />
libfoo.so -> libfoo.so.1.2<br />
</syntaxhighlight><br />
<br />
CMake makes no attempt to enforce sensible version numbers. It is up to the programmer to utilize this feature in a productive manner.<br />
<br />
<br />
===Installing Files===<br />
<br />
Software is typically installed into a directory separate from the source and build trees. This allows it to be distributed in a clean form and isolates users from the details of the build process. CMake provides the install() (page 317) command to specify how a project is to be installed. This command is invoked by a project in the CMakeLists file and tells CMake how to generate installation scripts. The scripts are executed at install time to perform the actual installation of files. For Makefile generators (UNIX, NMake, Borland, MinGW, etc.), the user simply runs make install (or nmake install) and the make tool will invoke CMake's installation module. With GUI based systems (Visual Studio, Xcode, etc.), the user simply builds the target called INSTALL.<br />
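<br />
From an already-configured build directory, the install step might be driven as follows (a sketch; the generator-independent form assumes CMake 2.8 or later):<br />
<br />
<syntaxhighlight lang="text"><br />
# Makefile generators<br />
make install<br />
<br />
# or, independent of the generator in use<br />
cmake --build . --target install<br />
</syntaxhighlight><br />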
<br />
Each call to the install command defines some installation rules. Within one CMakeLists file (source directory), these rules will be evaluated in the order that the corresponding commands are invoked. The order across multiple directories is not specified.<br />
<br />
The install command has several signatures designed for common installation use cases. A particular invocation of the command specifies the signature as the first argument. The signatures are TARGETS, FILES, PROGRAMS, DIRECTORY, SCRIPT, and CODE.<br />
<br />
'''install (TARGETS...)''' Installs the binary files corresponding to targets built inside the project.<br />
<br />
'''install (FILES...)''' General-purpose file installation, which is typically used for header files, documentation, and data files required by your software.<br />
<br />
'''install (PROGRAMS...)''' Installs executable files not built by the project, such as shell scripts. This signature is identical to install (FILES), except that the default permissions of the installed file include the executable bit.<br />
<br />
'''install (DIRECTORY...)''' Installs an entire directory tree. It may be used for installing directories with resources, such as icons and images.<br />
<br />
'''install (SCRIPT...)''' Specifies a user-provided CMake script file to be executed during installation. This is typically used to define pre-install or post-install actions for other rules.<br />
<br />
'''install (CODE...)''' Specifies user-provided CMake code to be executed during the installation. This is similar to install (SCRIPT), but the code is provided inline in the call as a string.<br />
<br />
The TARGETS, FILES, PROGRAMS, and DIRECTORY signatures are all meant to create install rules for files. The targets, files, or directories to be installed are listed immediately after the signature name argument. Additional details can be specified using keyword arguments followed by corresponding values. Keyword arguments provided by most of the signatures are as follows.<br />
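<br />
A sketch of the SCRIPT and CODE signatures (the script file name and message text are hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# run a project-provided CMake script at install time<br />
install (SCRIPT "${CMAKE_CURRENT_SOURCE_DIR}/PostInstall.cmake")<br />
<br />
# or provide the code inline as a string<br />
install (CODE "message (STATUS \"Install complete.\")")<br />
</syntaxhighlight><br />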
<br />
'''DESTINATION''' This argument specifies the location where the installation rule will place files, and must be followed by a directory path indicating the location. If the directory is specified as a full path, it will be evaluated at install time as an absolute path. If the directory is specified as a relative path, it will be evaluated at install time relative to the installation prefix. The prefix may be set by the user through the cache variable CMAKE_INSTALL_PREFIX (page 645). A platform-specific default is provided by CMake: /usr/local on UNIX, and <SystemDrive>/Program Files/<ProjectName> on Windows, where SystemDrive is along the lines of C: and ProjectName is the name given to the topmost PROJECT() (page 327) command.<br />
<br />
'''PERMISSIONS''' This argument specifies file permissions to be set on the installed files. This option is needed only to override the default permissions selected by a particular INSTALL command signature. Valid permissions are OWNER_READ, OWNER_WRITE, OWNER_EXECUTE, GROUP_READ, GROUP_WRITE, GROUP_EXECUTE, WORLD_READ, WORLD_WRITE, WORLD_EXECUTE, SETUID, and SETGID. Some platforms do not support all of these permissions; on such platforms those permission names are ignored.<br />
<br />
'''CONFIGURATIONS''' This argument specifies a list of build configurations for which an installation rule applies (Debug, Release, etc.). For Makefile generators, the build configuration is specified by the CMAKE_BUILD_TYPE cache variable. For Visual Studio and Xcode generators, the configuration is selected when the INSTALL target is built. An installation rule will be evaluated only if the current install configuration matches an entry in the list provided to this argument. Configuration name comparison is case-insensitive.<br />
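<br />
For instance, an install rule restricted to Release builds might look like the following sketch (the target name is hypothetical):<br />
<br />
<syntaxhighlight lang="text"><br />
# myExe is installed only when the Release configuration is installed<br />
install (TARGETS myExe DESTINATION bin CONFIGURATIONS Release)<br />
</syntaxhighlight><br />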
<br />
'''COMPONENT''' This argument specifies the installation component for which the installation rule applies. Some projects divide their installations into multiple components for separate packaging. For example, a project may define a Runtime component that contains the files needed to run a tool; a Development component containing the files needed to build extensions to the tool; and a Documentation component containing the manual pages and other help files. The project may then package each component separately for distribution by installing only one component at a time. By default, all components are installed. Component-specific installation is an advanced feature intended for use by package maintainers. It requires manual invocation of the installation scripts with an argument defining the COMPONENT variable to name the desired component. Note that component names are not defined by CMake. Each project may define its own set of components.<br />
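<br />
Such a component-specific install can be performed by invoking the generated install script manually, along these lines:<br />
<br />
<syntaxhighlight lang="text"><br />
# from the top of the build tree, install only the Runtime component<br />
cmake -DCOMPONENT=Runtime -P cmake_install.cmake<br />
</syntaxhighlight><br />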
<br />
'''OPTIONAL''' This argument specifies that it is not an error if the input file to be installed does not exist. If the input file exists, it will be installed as requested; if it does not exist, it will be silently skipped.<br />
<br />
Projects typically install some of the library and executable files created during their build process. The install command provides the TARGETS signature for this purpose:<br />
<br />
<syntaxhighlight lang="text"><br />
install (TARGETS targets...<br />
         [[ARCHIVE | LIBRARY | RUNTIME | FRAMEWORK | BUNDLE |<br />
           PRIVATE_HEADER | PUBLIC_HEADER | RESOURCE]<br />
          [DESTINATION <dir>]<br />
          [PERMISSIONS permissions...]<br />
          [CONFIGURATIONS [Debug|Release|...]]<br />
          [COMPONENT <component>]<br />
          [OPTIONAL]<br />
          [EXPORT <export-name>]<br />
          [NAMELINK_ONLY|NAMELINK_SKIP]<br />
         ] [...])<br />
</syntaxhighlight><br />
<br />
The TARGETS keyword is immediately followed by a list of the targets created using add_executable() (page 273) or add_library() (page 274), which are to be installed. One or more files corresponding to each target will be installed.<br />
<br />
Files installed with this signature may be divided into three categories: ARCHIVE, LIBRARY, and RUNTIME. These categories are designed to group target files by typical installation destination. The corresponding keyword arguments are optional, but if present, specify that other arguments following them apply only to target files of that type. Target files are categorized as follows:<br />
<br />
'''executables - "RUNTIME"''' Created by add_executable (.exe on Windows, no extension on UNIX)<br />
<br />
'''loadable modules - "LIBRARY"''' Created by add_library with the MODULE option (.dll on Windows, .so on UNIX)<br />
<br />
'''shared libraries - "LIBRARY"''' Created by add_library with the SHARED option on UNIX-like platforms (.so on most UNIX, .dylib on Mac)<br />
<br />
'''dynamic-link libraries - "RUNTIME"''' Created by add_library with the SHARED option on Windows platforms (.dll)<br />
<br />
'''import libraries - "ARCHIVE"''' A linkable file created by a dynamic-link library that exports symbols (.lib on Windows, .dll.a on Cygwin and MinGW).<br />
<br />
'''static libraries - "ARCHIVE"''' Created by add_library with the STATIC option (.lib on Windows, .a on UNIX, Cygwin, and MinGW)<br />
<br />
Consider a project that defines an executable, myExecutable, which links to a shared library mySharedLib. It also provides a static library myStaticLib and a plugin module to the executable called myPlugin that also links to the shared library. The executable, static library, and plugin file may be installed individually using the commands<br />
<br />
<syntaxhighlight lang="text"><br />
install (TARGETS myExecutable DESTINATION bin)<br />
install (TARGETS myStaticLib DESTINATION lib/myproject)<br />
install (TARGETS myPlugin DESTINATION lib)<br />
</syntaxhighlight><br />
<br />
The executable will not be able to run from the installed location until the shared library it links to is also installed. Installation of the library requires a bit more care in order to support all platforms. It must be installed in a location searched by the dynamic linker on each platform. On UNIX-like platforms, the library is typically installed to lib, while on Windows it should be placed next to the executable in bin. An additional challenge is that the import library associated with the shared library on Windows should be treated like the static library, and installed to lib/myproject. In other words, we have three different kinds of files created with a single target name that must be installed to three different destinations! Fortunately, this problem can be solved using the category keyword arguments. The shared library may be installed using the command:<br />
<br />
<syntaxhighlight lang="text"><br />
install (TARGETS mySharedLib<br />
RUNTIME DESTINATION bin<br />
LIBRARY DESTINATION lib<br />
ARCHIVE DESTINATION lib/myproject)<br />
</syntaxhighlight><br />
<br />
This tells CMake that the RUNTIME file (.dll) should be installed to bin, the LIBRARY file (.so) should be installed to lib, and the ARCHIVE (.lib) file should be installed to lib/myproject. On UNIX, the LIBRARY file will be installed; on Windows, the RUNTIME and ARCHIVE files will be installed.<br />
<br />
If the above sample project is to be packaged into separate run time and development components, we must assign the appropriate component to each target file installed. The executable, shared library, and plugin are required in order to run the application, so they belong in a Runtime component. Meanwhile, the import library (corresponding to the shared library on Windows) and the static library are only required to develop extensions to the application, and therefore belong in a Development component.<br />
<br />
Component assignments may be specified by adding the COMPONENT argument to each of the commands above. You may also combine all of the installation rules into a single command invocation, which is equivalent to all of the above commands with components added. The files generated by each target are installed using the rule for their category.<br />
<br />
<syntaxhighlight lang="text"><br />
install (TARGETS myExecutable mySharedLib myStaticLib myPlugin<br />
RUNTIME DESTINATION bin COMPONENT Runtime<br />
LIBRARY DESTINATION lib COMPONENT Runtime<br />
ARCHIVE DESTINATION lib/myproject COMPONENT Development)<br />
</syntaxhighlight><br />
<br />
Either NAMELINK_ONLY or NAMELINK_SKIP may be specified as a LIBRARY option. On some platforms, a versioned shared library has a symbolic link such as<br />
<br />
<syntaxhighlight lang="text"><br />
lib<name>.so -> lib<name>.so.1<br />
</syntaxhighlight><br />
<br />
where lib<name>.so.1 is the soname of the library, and lib<name>.so is a "namelink" that helps linkers find the library when given -l<name>. The NAMELINK_ONLY option results in installation of only the namelink when a library target is installed. The NAMELINK_SKIP option causes installation of the library files other than the namelink when a library target is installed. When neither option is given, both portions are installed. On platforms where versioned shared libraries do not have namelinks, or when a library is not versioned, the NAMELINK_SKIP option installs the library and the NAMELINK_ONLY option installs nothing. See the VERSION and SOVERSION target properties for details on creating versioned shared libraries.<br />
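<br />
As a sketch, the two portions of a versioned library could be routed to different destinations (the target foo and the Development component are illustrative):<br />
<br />
<syntaxhighlight lang="text"><br />
# install the versioned library files needed to run programs<br />
install (TARGETS foo LIBRARY DESTINATION lib NAMELINK_SKIP)<br />
<br />
# install only the lib<name>.so namelink, e.g. for a development package<br />
install (TARGETS foo LIBRARY DESTINATION lib<br />
         COMPONENT Development NAMELINK_ONLY)<br />
</syntaxhighlight><br />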
<br />
Projects may install files other than those that are created with add_executable or add_library, such as header files or documentation. General-purpose installation of files is specified using the FILES signature:<br />
<br />
<syntaxhighlight lang="text"><br />
install (FILES files... DESTINATION <dir><br />
[PERMISSIONS permissions...]<br />
[CONFIGURATIONS [Debug|Release|...]]<br />
[COMPONENT <component>]<br />
[RENAME <name>] [OPTIONAL])<br />
</syntaxhighlight><br />
<br />
The FILES keyword is immediately followed by a list of files to be installed. Relative paths are evaluated with respect to the current source directory. Files will be installed to the given DESTINATION directory. For example, the command<br />
<br />
<syntaxhighlight lang="text"><br />
install (FILES my-api.h ${CMAKE_CURRENT_BINARY_DIR}/my-config.h<br />
DESTINATION include)<br />
</syntaxhighlight><br />
<br />
installs the file my-api.h from the source tree, and the file my-config.h from the build tree into the include directory under the installation prefix. By default installed files are given the permissions OWNER_WRITE, OWNER_READ, GROUP_READ, and WORLD_READ, but this may be overridden by specifying the PERMISSIONS option. Consider cases in which users would want to install a global configuration file on a UNIX system that is readable only by its owner (such as root). We accomplish this with the command<br />
<br />
<syntaxhighlight lang="text"><br />
install (FILES my-rc DESTINATION /etc<br />
PERMISSIONS OWNER_WRITE OWNER_READ)<br />
</syntaxhighlight><br />
<br />
which installs the file my-rc with owner read/write permission into the absolute path /etc.<br />
<br />
The RENAME argument specifies a name for an installed file that may be different from the original file. Renaming is allowed only when a single file is installed by the command. For example, the command<br />
<br />
<syntaxhighlight lang="text"><br />
install (FILES version.h DESTINATION include RENAME my-version.h)<br />
</syntaxhighlight><br />
<br />
will install the file version.h from the source directory to include/my-version.h under the installation prefix.<br />
<br />
Projects may also install helper programs, such as shell scripts or Python scripts that are not actually compiled as targets. These may be installed with the FILES signature using the PERMISSIONS option to add execute permission. However, this case is common enough to justify a simpler interface. CMake provides the PROGRAMS signature for this purpose:<br />
<br />
<syntaxhighlight lang="text"><br />
install (PROGRAMS files... DESTINATION <dir><br />
[PERMISSIONS permissions...]<br />
[CONFIGURATIONS [Debug|Release|...]]<br />
[COMPONENT <component>]<br />
[RENAME <name>] [OPTIONAL])<br />
</syntaxhighlight><br />
<br />
The PROGRAMS keyword is immediately followed by a list of scripts to be installed. This command is identical to the FILES signature, except that the default permissions additionally include OWNER_EXECUTE, GROUP_EXECUTE, and WORLD_EXECUTE. For example, we may install a Python utility script with the command<br />
<br />
<syntaxhighlight lang="text"><br />
install (PROGRAMS my-util.py DESTINATION bin)<br />
</syntaxhighlight><br />
<br />
which installs my-util.py to the bin directory under the installation prefix and gives it owner, group, world read and execute permissions, plus owner write.<br />
<br />
Projects may also provide an entire directory full of resource files, such as icons or html documentation. An entire directory may be installed using the DIRECTORY signature:<br />
<br />
<syntaxhighlight lang="text"><br />
install (DIRECTORY dirs... DESTINATION <dir><br />
[FILE_PERMISSIONS permissions...]<br />
[DIRECTORY_PERMISSIONS permissions...]<br />
[USE_SOURCE_PERMISSIONS]<br />
[CONFIGURATIONS [Debug|Release|...]]<br />
[COMPONENT <component>]<br />
[[PATTERN <pattern> | REGEX <regex>]<br />
[EXCLUDE] [PERMISSIONS permissions...]] [...])<br />
</syntaxhighlight><br />
<br />
The DIRECTORY keyword is immediately followed by a list of directories to be installed. Relative paths are evaluated with respect to the current source directory. Each named directory is installed to the destination directory. The last component of each input directory name is appended to the destination directory as that directory is copied. For example, the command<br />
<br />
<syntaxhighlight lang="text"><br />
install (DIRECTORY data/icons DESTINATION share/myproject)<br />
</syntaxhighlight><br />
<br />
will install the data/icons directory from the source tree into share/myproject/icons under the installation prefix. A trailing slash will leave the last component empty and install the contents of the input directory to the destination. The command<br />
<br />
<syntaxhighlight lang="text"><br />
install (DIRECTORY doc/html/ DESTINATION doc/myproject)<br />
</syntaxhighlight><br />
<br />
installs the contents of doc/html from the source directory into doc/myproject under the installation prefix. If no input directory names are given, as in<br />
<br />
<syntaxhighlight lang="text"><br />
install (DIRECTORY DESTINATION share/myproject/user)<br />
</syntaxhighlight><br />
<br />
the destination directory will be created but nothing will be installed into it.<br />
<br />
Files installed by the DIRECTORY signature are given the same default permissions as the FILES signature. Directories installed by the DIRECTORY signature are given the same default permissions as the PROGRAMS signature. The FILE_PERMISSIONS and DIRECTORY_PERMISSIONS options may be used to override these defaults. Consider a case in which a directory full of example shell scripts is to be installed into a directory that is both owner and group writable. We may use the command<br />
<br />
<syntaxhighlight lang="text"><br />
install (DIRECTORY data/scripts DESTINATION share/myproject<br />
FILE_PERMISSIONS<br />
OWNER_READ OWNER_EXECUTE OWNER_WRITE<br />
GROUP_READ GROUP_EXECUTE<br />
WORLD_READ WORLD_EXECUTE<br />
DIRECTORY_PERMISSIONS<br />
OWNER_READ OWNER_EXECUTE OWNER_WRITE<br />
GROUP_READ GROUP_EXECUTE GROUP_WRITE<br />
WORLD_READ WORLD_EXECUTE)<br />
</syntaxhighlight><br />
<br />
which installs the directory data/scripts into share/myproject/scripts and sets the desired permissions. In some cases, a fully-prepared input directory created by the project may have the desired permissions already set. The USE_SOURCE_PERMISSIONS option tells CMake to use the file and directory permissions from the input directory during installation. If in the previous example the input directory were to have already been prepared with correct permissions, the following command may have been used instead:<br />
<br />
<syntaxhighlight lang="text"><br />
install (DIRECTORY data/scripts DESTINATION share/myproject<br />
USE_SOURCE_PERMISSIONS)<br />
</syntaxhighlight><br />
<br />
If the input directory to be installed is under source management, such as CVS, there may be extra subdirectories in the input that you do not wish to install. There may also be specific files that should not be installed or<br />
be installed with different permissions, while most files get the defaults. The PATTERN and REGEX options may be used for this purpose. A PATTERN option is followed first by a globbing pattern and then by an EXCLUDE or PERMISSIONS option. A REGEX option is followed first by a regular expression and then by EXCLUDE or PERMISSIONS. The EXCLUDE option skips installation of those files or directories matching the preceding pattern or expression, while the PERMISSIONS option assigns specific permissions to them.<br />
<br />
Each input file and directory is tested against the pattern or regular expression as a full path with forward slashes. A pattern will match only complete file or directory names occurring at the end of the full path, while a regular expression may match any portion. For example, the pattern foo* will match .../foo.txt but not .../myfoo.txt or .../foo/bar.txt; however, the regular expression foo will match all of them.<br />
<br />
Returning to the above example of installing an icons directory, consider the case in which the input directory is managed by CVS and also contains some extra text files that we do not want to install. The command<br />
<br />
<syntaxhighlight lang="text"><br />
install (DIRECTORY data/icons DESTINATION share/myproject<br />
PATTERN "CVS" EXCLUDE<br />
PATTERN "*.txt" EXCLUDE)<br />
</syntaxhighlight><br />
<br />
installs the icons directory while ignoring any CVS directory or text file contained. The equivalent command using the REGEX option is<br />
<br />
<syntaxhighlight lang="text"><br />
install (DIRECTORY data/icons DESTINATION share/myproject<br />
REGEX "/CVS$" EXCLUDE<br />
REGEX "/[^/]*\\.txt$" EXCLUDE)<br />
</syntaxhighlight><br />
<br />
which uses '/' and '$' to constrain the match in the same way as the patterns. Consider a similar case in which the input directory contains shell scripts and text files that we wish to install with different permissions than the other files. The command<br />
<br />
<syntaxhighlight lang="text"><br />
install (DIRECTORY data/other/ DESTINATION share/myproject<br />
PATTERN "CVS" EXCLUDE<br />
PATTERN "*.txt"<br />
PERMISSIONS OWNER_READ OWNER_WRITE<br />
PATTERN "*.sh"<br />
PERMISSIONS OWNER_READ OWNER_WRITE OWNER_EXECUTE)<br />
</syntaxhighlight><br />
<br />
will install the contents of data/other from the source directory to share/myproject while ignoring CVS directories and giving specific permissions to .txt and .sh files.<br />
<br />
Project installations may need to perform tasks other than just placing files in the installation tree. Third-party packages may provide their own mechanisms for registering new plugins that must be invoked during project installation. The SCRIPT signature is provided for this purpose:<br />
<br />
<syntaxhighlight lang="text"><br />
install (SCRIPT <file>)<br />
</syntaxhighlight><br />
<br />
The SCRIPT keyword is immediately followed by the name of a CMake script. CMake will execute the script during installation. If the file name given is a relative path, it will be evaluated with respect to the current source directory. A simple use case is printing a message during installation. We first write a message.cmake file containing the code<br />
<br />
<syntaxhighlight lang="text"><br />
message ("Installing My Project")<br />
</syntaxhighlight><br />
<br />
and then reference this script using the command:<br />
<br />
<syntaxhighlight lang="text"><br />
install (SCRIPT message.cmake)<br />
</syntaxhighlight><br />
<br />
Custom installation scripts are not executed during the main CMakeLists file processing; they are executed during the installation process itself. Variables and macros defined in the code containing the install(SCRIPT) call will not be accessible from the script. However, there are a few variables defined during script execution that may be used to get information about the installation. The variable CMAKE_INSTALL_PREFIX (page 645) is set to the actual installation prefix. This may be different from the corresponding cache variable value, because the installation scripts may be executed by a packaging tool that uses a different prefix. An environment variable ENV{DESTDIR} may be set by the user or packaging tool. Its value is prepended to the installation prefix and to absolute installation paths to determine the location where files are installed. In order to reference an install location on disk, custom scripts may use $ENV{DESTDIR}${CMAKE_INSTALL_PREFIX} as the top portion of the path. The variable CMAKE_INSTALL_CONFIG_NAME is set to the name of the build configuration currently being installed (Debug, Release, etc.). During component-specific installation, the variable CMAKE_INSTALL_COMPONENT is set to the name of the current component.<br />
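<br />
These variables can be combined in a custom install script. The following sketch reports where files are being placed; the script name and messages are illustrative:<br />
<br />
<syntaxhighlight lang="text"><br />
# report-install.cmake: run at install time via install(SCRIPT).<br />
# Combine DESTDIR and the prefix to get the on-disk location.<br />
set (_dest "$ENV{DESTDIR}${CMAKE_INSTALL_PREFIX}")<br />
message ("Installing ${CMAKE_INSTALL_CONFIG_NAME} files to ${_dest}")<br />
if (CMAKE_INSTALL_COMPONENT)<br />
  message ("Current component: ${CMAKE_INSTALL_COMPONENT}")<br />
endif ()<br />
</syntaxhighlight><br />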
<br />
Custom installation scripts, as simple as the message above, are more easily created with the script code placed inline in the call to the INSTALL command. The CODE signature is provided for this purpose:<br />
<br />
<syntaxhighlight lang="text"><br />
install (CODE "<code>")<br />
</syntaxhighlight><br />
<br />
The CODE keyword is immediately followed by a string containing the code to place in the installation script. An install-time message may be created using the command<br />
<br />
<syntaxhighlight lang="text"><br />
install (CODE "MESSAGE(\"Installing My Project\")")<br />
</syntaxhighlight><br />
<br />
which has the same effect as the message.cmake script but contains the code inline.<br />
<br />
<br />
====Installing Prerequisite Shared Libraries====<br />
<br />
Executables are frequently built using shared libraries as building blocks. When you install such an executable, you must also install its prerequisite shared libraries, called "prerequisites" because the executable requires their presence in order to load and run properly. The three main sources of shared libraries are the operating system itself, the build products of your own project, and third party libraries belonging to an external project. The ones from the operating system may be relied upon to be present without installing anything: they are on the base platform where your executable runs. The build products in your own project presumably have add_library build rules in the CMakeLists files, and so it should be straightforward to create CMake install rules for them. It is the third party libraries that frequently become a high maintenance item when there are more than a handful of them, or when the set of them fluctuates from version to version of the third party project. Libraries may be added, code may be reorganized, and the third party shared libraries themselves may actually have additional prerequisites that are not obvious at first glance.<br />
<br />
CMake provides two modules to make it easier to deal with required shared libraries. The first module, GetPrerequisites.cmake, provides the get_prerequisites function to analyze and classify the prerequisite shared libraries upon which an executable depends. Given an executable file as input, it will produce a list of the shared libraries required to run that executable, including any prerequisites of the discovered shared libraries themselves. It uses native tools on the various underlying platforms to perform this analysis: dumpbin (Windows), otool (Mac), and ldd (Linux). The second module, BundleUtilities.cmake, provides the fixup_bundle function to copy and fix prerequisite shared libraries using well-defined locations relative to the executable. For Mac bundle applications, it embeds the libraries inside the bundle, fixing them with install_name_tool to make a self-contained unit. On Windows, it copies the libraries into the same directory with the executable since executables will search in their own directories for their required DLLs.<br />
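<br />
As a sketch, get_prerequisites may be called from an install-time script like this (the installed executable path is an assumption for illustration):<br />
<br />
<syntaxhighlight lang="text"><br />
include (GetPrerequisites)<br />
<br />
# Arguments: target file, output variable, exclude_system,<br />
# recurse, exepath, and extra search directories.<br />
get_prerequisites ("${CMAKE_INSTALL_PREFIX}/bin/myExecutable"<br />
  prereqs 1 1 "" "")<br />
<br />
foreach (p ${prereqs})<br />
  message ("prerequisite: ${p}")<br />
endforeach ()<br />
</syntaxhighlight><br />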
<br />
The fixup_bundle function helps you create relocatable install trees. Mac users appreciate self-contained bundle applications: you can drag them anywhere, double click them, and they still work. They do not rely on anything being installed in a certain location other than the operating system itself. Similarly, Windows users without administrative privileges appreciate a relocatable install tree where an executable and all required DLLs are installed in the same directory, so that it works no matter where you install it. You can even move things around after installing them and it will still work.<br />
<br />
To use fixup_bundle, first install one of your executable targets. Then, configure a CMake script that can be called at install time. Inside the configured CMake script, simply include BundleUtilities and call the fixup_bundle function with appropriate arguments.<br />
<br />
In CMakeLists.txt<br />
<br />
<syntaxhighlight lang="text"><br />
install (TARGETS myExecutable DESTINATION bin)<br />
<br />
# To install, for example, MSVC runtime libraries:<br />
include (InstallRequiredSystemLibraries)<br />
<br />
# To install other/non-system 3rd party required libraries:<br />
configure_file (<br />
${CMAKE_CURRENT_SOURCE_DIR}/FixBundle.cmake.in<br />
${CMAKE_CURRENT_BINARY_DIR}/FixBundle.cmake<br />
@ONLY<br />
)<br />
<br />
install (SCRIPT ${CMAKE_CURRENT_BINARY_DIR}/FixBundle.cmake)<br />
</syntaxhighlight><br />
<br />
In FixBundle.cmake.in:<br />
<br />
<syntaxhighlight lang="text"><br />
include (BundleUtilities)<br />
<br />
# Set bundle to the full path name of the executable already<br />
# existing in the install tree:<br />
set (bundle<br />
"${CMAKE_INSTALL_PREFIX}/myExecutable@CMAKE_EXECUTABLE_SUFFIX@")<br />
<br />
# Set other_libs to a list of full path names to additional<br />
# libraries that cannot be reached by dependency analysis.<br />
# (Dynamically loaded PlugIns, for example.)<br />
set (other_libs "")<br />
<br />
# Set dirs to a list of directories where prerequisite libraries<br />
# may be found:<br />
set (dirs<br />
"@CMAKE_RUNTIME_OUTPUT_DIRECTORY@"<br />
"@CMAKE_LIBRARY_OUTPUT_DIRECTORY@"<br />
)<br />
<br />
fixup_bundle ("${bundle}" "${other_libs}" "${dirs}")<br />
</syntaxhighlight><br />
<br />
You are responsible for verifying that you have permission to copy and distribute the prerequisite shared libraries for your executable. Some libraries may have restrictive software licenses that prohibit making copies a la fixup_bundle.<br />
<br />
<br />
====Exporting and Importing Targets====<br />
<br />
CMake 2.6 introduced support for exporting targets from one CMake-based project and importing them into another. The main feature allowing this functionality is the notion of an IMPORTED target. Here we present imported targets and then show how CMake files may be generated by a project to export its targets for use by other projects.<br />
<br />
<br />
====Importing Targets====<br />
<br />
Imported targets are used to convert files outside of the project on disk into logical targets inside a CMake project. They are created using the IMPORTED (page 590) option to the add_executable() (page 273) and add_library() (page 274) commands. No build files are generated for imported targets. They are used simply for convenient, flexible reference to outside executables and libraries. Consider the following example which creates and uses an IMPORTED executable target<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (generator IMPORTED) # 1<br />
set_property (TARGET generator PROPERTY<br />
IMPORTED_LOCATION "/path/to/some_generator") # 2<br />
<br />
add_custom_command (OUTPUT generated.c<br />
COMMAND generator generated.c) # 3<br />
<br />
add_executable (myexe src1.c src2.c generated.c)<br />
</syntaxhighlight><br />
<br />
Line #1 creates a new CMake target called generator. Line #2 tells CMake the location of the target on disk to import. Line #3 references the target in a custom command. Once CMake is run, the generated build system will contain a command line such as<br />
<br />
<syntaxhighlight lang="text"><br />
/path/to/some_generator /project/binary/dir/generated.c<br />
</syntaxhighlight><br />
<br />
in the rule to generate the source file. In a similar manner, libraries from other projects may be used through IMPORTED targets<br />
<br />
<syntaxhighlight lang="text"><br />
add_library (foo IMPORTED)<br />
set_property (TARGET foo PROPERTY<br />
IMPORTED_LOCATION "/path/to/libfoo.a")<br />
add_executable (myexe src1.c src2.c)<br />
target_link_libraries (myexe foo)<br />
</syntaxhighlight><br />
<br />
On Windows, a .dll and its .lib import library may be imported together:<br />
<br />
<syntaxhighlight lang="text"><br />
add_library (bar IMPORTED)<br />
set_property (TARGET bar PROPERTY<br />
IMPORTED_LOCATION "c:/path/to/bar.dll")<br />
set_property (TARGET bar PROPERTY<br />
IMPORTED_IMPLIB "c:/path/to/bar.lib")<br />
add_executable (myexe src1.c src2.c)<br />
target_link_libraries (myexe bar)<br />
</syntaxhighlight><br />
<br />
A library with multiple configurations may be imported with a single target:<br />
<br />
<syntaxhighlight lang="text"><br />
add_library (foo IMPORTED)<br />
set_property (TARGET foo PROPERTY<br />
IMPORTED_LOCATION_RELEASE "c:/path/to/foo.lib")<br />
set_property (TARGET foo PROPERTY<br />
IMPORTED_LOCATION_DEBUG "c:/path/to/foo_d.lib")<br />
add_executable (myexe src1.c src2.c)<br />
target_link_libraries (myexe foo)<br />
</syntaxhighlight><br />
<br />
The generated build system will link myexe to foo.lib when it is built in the release configuration, and foo_d.lib when built in the debug configuration.<br />
<br />
<br />
====Exporting Targets====<br />
<br />
Imported targets on their own are useful, but they still require the project that imports them to know the locations of the target files on disk. The real power of imported targets is when the project providing the target files also provides a file to help import them.<br />
<br />
The install(TARGETS) and install(EXPORT) commands work together to install both a target and a CMake file to help import it. For example, the code<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (generator generator.c)<br />
install (TARGETS generator DESTINATION lib/myproj/generators<br />
EXPORT myproj-targets)<br />
install (EXPORT myproj-targets DESTINATION lib/myproj)<br />
</syntaxhighlight><br />
<br />
will install the two files<br />
<br />
<syntaxhighlight lang="text"><br />
<prefix>/lib/myproj/generators/generator<br />
<prefix>/lib/myproj/myproj-targets.cmake<br />
</syntaxhighlight><br />
<br />
The first is the regular executable named generator. The second file, myproj-targets.cmake, is a CMake file designed to make it easy to import generator. This file contains code such as<br />
<br />
<syntaxhighlight lang="text"><br />
get_filename_component (_self "${CMAKE_CURRENT_LIST_FILE}" PATH)<br />
get_filename_component (PREFIX "${_self}/../.." ABSOLUTE)<br />
add_executable (generator IMPORTED)<br />
set_property (TARGET generator PROPERTY<br />
IMPORTED_LOCATION "${PREFIX}/lib/myproj/generators/generator")<br />
</syntaxhighlight><br />
<br />
(note that ${PREFIX} is computed relative to the file location). An outside project may now use generator as follows<br />
<br />
<syntaxhighlight lang="text"><br />
include (${PREFIX}/lib/myproj/myproj-targets.cmake) # 1<br />
add_custom_command (OUTPUT generated.c<br />
COMMAND generator generated.c) # 2<br />
add_executable (myexe src1.c src2.c generated.c)<br />
</syntaxhighlight><br />
<br />
Line #1 loads the target import script (see section 0 to make this automatic). The script may import any number of targets. Their locations are computed relative to the script location so the install tree may be easily moved. Line #2 references the generator executable in a custom command. The resulting build system will run the executable from its installed location. Libraries may also be exported and imported<br />
<br />
<syntaxhighlight lang="text"><br />
add_library (foo STATIC foo1.c)<br />
install (TARGETS foo DESTINATION lib EXPORT myproj-targets)<br />
install (EXPORT myproj-targets DESTINATION lib/myproj)<br />
</syntaxhighlight><br />
<br />
This installs the library and an import file referencing it. Outside projects may simply write<br />
<br />
<syntaxhighlight lang="text"><br />
include (${PREFIX}/lib/myproj/myproj-targets.cmake)<br />
add_executable (myexe src1.c)<br />
target_link_libraries (myexe foo)<br />
</syntaxhighlight><br />
<br />
and the executable will be linked to the library foo, exported, and installed by the original project.<br />
<br />
Any number of target installations may be associated with the same export name. Export names are considered global, so any directory may contribute a target installation. Only one call to the install(EXPORT) command is needed to install an import file that references all of the targets. Both of the examples above may be combined into a single export file, even if they are in different subdirectories of the project, as shown in the code below.<br />
<br />
<syntaxhighlight lang="text"><br />
# A/CMakeLists.txt<br />
add_executable (generator generator.c)<br />
install (TARGETS generator DESTINATION lib/myproj/generators<br />
EXPORT myproj-targets)<br />
<br />
# B/CMakeLists.txt<br />
add_library (foo STATIC foo1.c)<br />
install (TARGETS foo DESTINATION lib EXPORT myproj-targets)<br />
<br />
# Top CMakeLists.txt<br />
add_subdirectory (A)<br />
add_subdirectory (B)<br />
install (EXPORT myproj-targets DESTINATION lib/myproj)<br />
</syntaxhighlight><br />
<br />
Typically projects are built and installed before being used by an outside project. However, in some cases, it is desirable to export targets directly from a build tree. The targets may then be used by an outside project that references the build tree with no installation involved. The export() (page 287) command is used to generate a file exporting targets from a project build tree. For example, the code<br />
<br />
<syntaxhighlight lang="text"><br />
add_executable (generator generator.c)<br />
export (TARGETS generator FILE myproj-exports.cmake)<br />
</syntaxhighlight><br />
<br />
will create a file in the project build tree called myproj-exports.cmake, which contains the required code to import the target. This file may be loaded by an outside project that is aware of the project build tree, in order to use the executable to generate a source file. An example application of this feature is for building a generator executable on a host platform when cross-compiling. The project containing the generator executable may be built on the host platform and then the project that is being cross-compiled for another platform may load it.<br />
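<br />
For example, the cross-compiled project might load the host build tree's export file directly; the build-tree path below is hypothetical:<br />
<br />
<syntaxhighlight lang="text"><br />
# In the project being cross-compiled; the host build tree<br />
# path is hypothetical:<br />
include (/path/to/host-build/myproj-exports.cmake)<br />
<br />
# Run the host-built generator to produce a source file:<br />
add_custom_command (OUTPUT generated.c<br />
  COMMAND generator generated.c)<br />
add_executable (myexe src1.c src2.c generated.c)<br />
</syntaxhighlight><br />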
<br />
<br />
===Advanced Commands===<br />
<br />
There are a few commands that can be very useful, but are not typically used in writing CMakeLists files. This section will discuss a few of these commands and when they are useful. First, consider the add_dependencies() (page 273) command which creates a dependency between two targets. CMake automatically creates dependencies between targets when it can determine them. For example, CMake will automatically create a dependency for an executable target that depends on a library target. The add_dependencies command is typically used to specify inter target dependencies between targets where at least one of the targets is a custom target (see Chapter 6 for more information on custom targets).<br />
<br />
The include_regular_expression() (page 316) command also relates to dependencies. This command controls the regular expression that is used for tracing source code dependencies. By default, CMake will trace all the dependencies for a source file, including system files such as stdio.h. If you specify a regular expression with the include_regular_expression command, that regular expression will be used to limit which include files are processed. For example, if your software project's include files all started with the prefix foo (e.g. fooMain.c, fooStruct.h, etc.), you could specify a regular expression of ^foo.*$ to limit the dependency checking to just the files of your project.<br />
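<br />
Continuing that example, the command would be:<br />
<br />
<syntaxhighlight lang="text"><br />
# Only trace dependencies on headers whose names start with "foo":<br />
include_regular_expression ("^foo.*$")<br />
</syntaxhighlight><br />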
<br />
Occasionally you might want to get a listing of all the source files that another source file depends on. This is useful when you have a program that uses pieces of a large library, but are unsure which pieces it is using. The output_required_files() (page 349) command will take a source file and produce a list of all the other source files it depends on. You could then use this list to produce a reduced version of the library that only contains the necessary files for your program.<br />
<br />
Some tools, such as Rational Purify on the Sun platform, are run by inserting an extra command before the final link step. So, instead of<br />
<br />
<syntaxhighlight lang="text"><br />
CC foo.o -o foo<br />
</syntaxhighlight><br />
<br />
The link step would be<br />
<br />
<syntaxhighlight lang="text"><br />
purify CC foo.o -o foo<br />
</syntaxhighlight><br />
<br />
It is possible to do this with CMake. To run an extra program in front of the link line, change the rule variables CMAKE_CXX_LINK_EXECUTABLE and CMAKE_C_LINK_EXECUTABLE. Rule variables are described in Chapter 11. The values for these variables are contained in the file Modules/CMakeDefaultMakeRuleVariables.cmake, and they are sometimes redefined in Modules/Platform/*.cmake. Make sure the variable is set after the PROJECT() (page 327) command in the CMakeLists file. Here is a small example of using purify to link a program called foo<br />
<br />
<syntaxhighlight lang="text"><br />
project (foo)<br />
<br />
set (CMAKE_CXX_LINK_EXECUTABLE<br />
"purify ${CMAKE_CXX_LINK_EXECUTABLE}")<br />
add_executable (foo foo.cxx)<br />
</syntaxhighlight><br />
<br />
Of course, for a generic CMakeLists file, you should have some if checks for the correct platform. This will only work for the Makefile generators because the rule variables are not used by the IDE generators. Another option would be to use $(PURIFY) instead of plain purify. This would pass through CMake into the Makefile and be a make variable. The variable could be defined on the command line like this: make PURIFY=purify. If not specified then it would just use the regular rule for linking a C++ executable as PURIFY would be expanded by make to nothing.<br />
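<br />
The $(PURIFY) variant described above might be sketched as follows; note the make-style $() is deliberately left unexpanded by CMake:<br />
<br />
<syntaxhighlight lang="text"><br />
project (foo)<br />
<br />
# $(PURIFY) passes through to the generated Makefile as a make<br />
# variable; it expands to nothing unless the user runs, e.g.:<br />
#   make PURIFY=purify<br />
set (CMAKE_CXX_LINK_EXECUTABLE<br />
  "$(PURIFY) ${CMAKE_CXX_LINK_EXECUTABLE}")<br />
add_executable (foo foo.cxx)<br />
</syntaxhighlight><br />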
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div><br />
<hr />
<div>==CHAPTER THREE : KEY CONCEPTS==<br />
<br />
===Main Structures===<br />
<br />
This chapter provides an introduction to CMake's key concepts. As you start working with CMake, you will run into a variety of concepts such as targets, generators, and commands. In CMake, these concepts are implemented as C++ classes and are referenced in many of CMake's commands. Understanding these concepts will provide you with the working knowledge you need to create effective CMakeLists files.<br />
<br />
Before going into detail about CMake's classes, it is worth understanding their basic relationships. At the lowest level are source files; these correspond to typical C or C++ source code files. Source files are combined into targets. A target is typically an executable or library. A directory represents a directory in the source tree and typically has a CMakeLists file and one or more targets associated with it. Every directory has a local generator that is responsible for generating the Makefiles or project files for that directory. All of the local generators share a common global generator that oversees the build process. Finally, the global generator is created and driven by the cmake class itself.<br />
<br />
Figure 3.1 shows the basic class structure of CMake. We will now consider CMake's concepts in a bit more detail. CMake's execution begins by creating an instance of the cmake class and passing command line arguments to it. This class manages the overall configuration process and holds information that is global to the build process, such as the cache values. One of the first things the cmake class does is to create the correct global generator based on the user's selection of which generator to use (such as Visual Studio 10, Borland Makefiles, or UNIX Makefiles). At this point, the cmake class passes control to the global generator it created by invoking the configure and generate methods.<br />
<br />
<br />
<<Figure 3.1 : CMake Internals>><br />
<br />
<br />
The global generator is responsible for managing the configuration and generation of all of the Makefiles (or project files) for a project. In practice, most of the work is actually done by local generators that are created by the global generator. One local generator is created for each directory of the project that is processed. So while a project will have only one global generator, it may have many local generators. For example, under Visual Studio 7, the global generator creates a solution file for the entire project while the local generators create a project file for each target in their directory.<br />
<br />
In the case of the "Unix Makefiles" generator, the local generators create most of the Makefiles and the global generator simply orchestrates the process and creates the main top-level Makefile. Implementation details vary widely among generators. Visual Studio 6 generators make use of .dsp and .dsw file templates and perform variable replacements on them. The generators for Visual Studio 7 and later directly generate the XML output without using any file templates. The Makefile generators including UNIX, NMake, Borland, etc. use a set of rule templates and replacements to generate their Makefiles.<br />
<br />
<<Figure 3.2: Sample Directory Tree>><br />
<br />
Each local generator has an instance of the class cmMakefile, which is where the results of parsing the CMakeLists files are stored. For each directory in a project there will be a single cmMakefile instance, which is why the cmMakefile class is often referred to as the directory. This is clearer for build systems that do not use Makefiles. That instance will hold all of the information from parsing that directory's CMakeLists file (see Figure 3.1). One way to think of the cmMakefile class is as a structure that starts out initialized with a few variables from its parent directory, and is then filled in as the CMakeLists file is processed. Reading in the CMakeLists file is simply a matter of CMake executing the commands it finds in the order it encounters them.<br />
<br />
Each command in CMake is implemented as a separate C++ class, and has two main parts. The first part of a command is the InitialPass method, which receives the arguments and the cmMakefile instance for the directory currently being processed and performs its operations. The set command processes its arguments and if the arguments are correct, it calls a method on the cmMakefile to set the variable. The results of the command are always stored in the cmMakefile instance; information is never stored in a command. The last part of a command is the FinalPass. The FinalPass of a command is executed after all commands (for the entire CMake project) have had their InitialPass invoked. Most commands do not have a FinalPass, but in some rare cases a command must do something with global information that may not be available during the initial pass.<br />
<br />
Once all of the CMakeLists files have been processed, the generators use the information collected into the<br />
cmMakefile instances to produce the appropriate files for the target build system (such as Makefiles).<br />
<br />
<br />
<br />
===Targets===<br />
<br />
Now that we have discussed the overall process of CMake, let us consider some of the key items stored in the cmMakefile instance. Probably the most important item is targets. Targets represent executables, libraries, and utilities built by CMake. Every add_library() (page 274), add_executable() (page 273), and add_custom_target() (page 272) command creates a target. For example, the following command will create a target named "foo" that is a static library, with foo1.c and foo2.c as source files.
<br />
<syntaxhighlight lang="text"><br />
add_library (foo STATIC foo1.c foo2.c)<br />
</syntaxhighlight><br />
<br />
The name "foo" is now available for use as a library name everywhere else in the project, and CMake will know how to expand the name into the library when needed. Libraries can be declared as a particular type such as STATIC, SHARED, MODULE, or left undeclared. STATIC indicates that the library must be built as a static library. Likewise, SHARED indicates it must be built as a shared library. MODULE indicates that the library must be created so that it can be dynamically loaded into an executable. Module libraries are implemented as shared libraries on many platforms, but not all. Therefore, CMake does not allow other targets to link to modules. If none of these options are specified, it indicates that the library could be built as either shared or static. In that case, CMake uses the setting of the variable BUILD_SHARED_LIBS to determine if the library should be SHARED or STATIC. If it is not set, then CMake defaults to building static libraries.
<br />
Likewise, executables have some options. By default, an executable will be a traditional console application that has a main entry point. One may specify a WIN32 option to request a WinMain entry point on Windows systems, while retaining main on non-Windows systems.<br />
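For example, a minimal sketch of both executable forms (the target and file names here are hypothetical, not from the text):

<syntaxhighlight lang="text">
# a console application; uses a main() entry point everywhere
add_executable (mytool main.c)

# a Windows GUI application; uses WinMain() on Windows systems,
# and falls back to main() on non-Windows systems
add_executable (myguiapp WIN32 winmain.c)
</syntaxhighlight>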
<br />
In addition to storing their type, targets also keep track of general properties. These properties can be set and retrieved using the set_target_properties() (page 332) and get_target_property() (page 312) commands, or the more general set_property() (page 329) and get_property() (page 311) commands. One useful property is LINK_FLAGS (page 598), which is used to specify additional link flags for a specific target. Targets store a list of libraries that they link against, which are set using the target_link_libraries() (page 340) command. Names passed into this command can be libraries, full paths to libraries, or the name of a library from an add_library() (page 274) command. Targets also store the link directories to use when linking, and custom commands to execute after building.
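As an illustrative sketch of setting and reading a target property (the target name and the flag value here are hypothetical):

<syntaxhighlight lang="text">
add_executable (myapp main.c)

# set the LINK_FLAGS property on this one target
set_target_properties (myapp PROPERTIES LINK_FLAGS "-Wl,--as-needed")

# read the property back into a variable
get_target_property (myapp_link_flags myapp LINK_FLAGS)
message ("myapp LINK_FLAGS: ${myapp_link_flags}")
</syntaxhighlight>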
<br />
For each library or executable CMake creates, it keeps track of all the libraries on which that target depends. Since static libraries do not actually link to the libraries on which they depend, it is important for CMake to keep track of their dependencies so they can be specified when other targets link to the static library. For example:
<br />
<syntaxhighlight lang="text"><br />
add_library (foo foo.cxx)<br />
target_link_libraries (foo bar)<br />
<br />
add_executable (foobar foobar.cxx)<br />
target_link_libraries (foobar foo)
</syntaxhighlight><br />
<br />
will link the libraries "foo" and "bar" into the executable "foobar" even though only "foo" was explicitly specified for it. This is required when linking to static libraries. Since the foo library uses symbols from the bar library, foobar will most likely also need bar since it uses foo.<br />
<br />
In some cases, such as when using external libraries or when reducing overlinking while creating dynamic libraries, you may want only a subset of your link dependencies to be propagated to the targets that link to you. These advanced use cases are covered in the Advanced Linking section of the Linking chapter.
<br />
<br />
===Source Files===<br />
<br />
The source file structure is in many ways similar to a target. It stores the filename, extension, and a number of general properties related to a source file. Like targets, you can set and get properties using set_source_files_properties and get_source_file_property, or the more generic versions. Available properties include:
<br />
''COMPILE_FLAGS'' Compile flags specific to this source file. These can include source-specific -D and -I flags.
<br />
''GENERATED'' The GENERATED property indicates that the source file is generated as part of the build process. It tells CMake not to complain if the source file does not exist prior to building. This is set automatically by add_custom_command for its output.<br />
<br />
''OBJECT_DEPENDS'' Adds additional files on which this source file should depend. CMake automatically performs dependency analysis to determine the usual C, C++, and Fortran dependencies. This property is rarely needed; it is used in cases where there is an unconventional dependency, or where the source files do not exist at dependency analysis time.
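The three properties above can be set with set_source_files_properties and read back with get_source_file_property. A hedged sketch, with hypothetical file names:

<syntaxhighlight lang="text">
# source-specific preprocessor and include flags
set_source_files_properties (render.c PROPERTIES
  COMPILE_FLAGS "-DUSE_FAST_MATH -I${PROJECT_SOURCE_DIR}/extra")

# parser.c is produced during the build, so it may not exist beforehand
set_source_files_properties (parser.c PROPERTIES GENERATED 1)

# recompile main.c whenever grammar.def changes
set_source_files_properties (main.c PROPERTIES
  OBJECT_DEPENDS "${PROJECT_SOURCE_DIR}/grammar.def")

# read a property back into a variable
get_source_file_property (render_flags render.c COMPILE_FLAGS)
</syntaxhighlight>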
<br />
<br />
===Directories, Generators, Tests, and Properties===<br />
<br />
In addition to targets and source files, you may find yourself occasionally working with other classes such as directories, generators, and tests. Normally such interactions take the shape of setting or getting properties from these objects. All of these classes have properties associated with them, as do source files and targets. A property is a key-value pair attached to a specific object such as a target. The most generic way to access properties is through the set_property() (page 329) and get_property() (page 311) commands. These commands allow you to set or get a property from any class in CMake that has properties. Some of the properties for targets and source files have already been covered. Some useful properties for a directory include:<br />
<br />
''ADDITIONAL_MAKE_CLEAN_FILES'' This property specifies a list of additional files that will be cleaned as a part of the "make clean" stage. CMake will clean up any generated files that it knows about by default, but your build process may use other tools that leave files behind. This property can be set to a list of those files so that they also will be properly cleaned up.<br />
<br />
''EXCLUDE_FROM_ALL'' This property indicates whether all the targets in this directory and all sub-directories should be excluded from the default build target. If it is not set, then with a Makefile generator, for example, typing make will cause these targets to be built as well. The same concept applies to the default build of other generators.
<br />
''LISTFILE_STACK'' This property is mainly useful when trying to debug errors in your CMake scripts. It returns a list of which list files are currently being processed, in order. So if one CMakeLists file does an include command, it is effectively pushing the included CMakeLists file onto the stack.<br />
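These directory properties can be manipulated with the generic property commands. A sketch, with hypothetical file names, applied to the current directory:

<syntaxhighlight lang="text">
# clean up files left behind by an external tool during "make clean"
set_property (DIRECTORY PROPERTY
  ADDITIONAL_MAKE_CLEAN_FILES "parser.out" "gram.log")

# exclude this directory's targets from the default build
set_property (DIRECTORY PROPERTY EXCLUDE_FROM_ALL 1)

# print the current stack of list files while debugging
get_property (stack DIRECTORY PROPERTY LISTFILE_STACK)
message ("listfile stack: ${stack}")
</syntaxhighlight>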
<br />
<br />
A full list of properties supported in CMake can be obtained by running cmake with the --help-property-list option. The generators and directories are automatically created for you as CMake processes your source tree.<br />
<br />
<br />
===Variables and Cache Entries===<br />
<br />
CMakeLists files use variables much like any programming language. As discussed in Chapter 2, variables hold string values for later use. A number of useful variables are automatically defined by CMake and are discussed in the cmake-variables(7) (page 623) manual.
<br />
Variables in CMake are referenced using a ${VARIABLE} notation, and are defined in the order of the execution of set commands. Consider the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
# FOO is undefined<br />
<br />
set (FOO 1)<br />
# FOO is now set to 1<br />
<br />
set (FOO 0)<br />
# FOO is now set to 0<br />
</syntaxhighlight><br />
<br />
This may seem straightforward, but consider the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
set (FOO 1)<br />
<br />
if (${FOO} LESS 2)<br />
set (FOO 2)<br />
else (${FOO} LESS 2)<br />
set (FOO 3)<br />
endif (${FOO} LESS 2)<br />
</syntaxhighlight><br />
<br />
Clearly the if statement is true, which means that the body of the if statement will be executed. That will set the variable FOO to 2, and so when the else statement is encountered FOO will have a value of 2. Normally in CMake the new value of FOO would be used, but the else statement is a rare exception to the rule and always refers back to the value of the variable when the if statement was executed. In this case, the body of the else clause will not be executed. To further understand the scope of variables, consider this example:<br />
<br />
<syntaxhighlight lang="text"><br />
set (foo 1)<br />
<br />
# process the dir1 subdirectory<br />
add_subdirectory (dir1)<br />
<br />
# include and process the commands in file1.cmake<br />
include (file1.cmake)<br />
<br />
set (bar 2)
<br />
# process the dir2 subdirectory<br />
add_subdirectory (dir2)<br />
<br />
# include and process the commands in file2.cmake<br />
include (file2.cmake)<br />
</syntaxhighlight><br />
<br />
In this example, because the variable foo is defined at the beginning, it will be defined while processing both dir1 and dir2. In contrast, bar will only be defined when processing dir2. Likewise, foo will be defined when processing both file1.cmake and file2.cmake, whereas bar will only be defined while processing file2.cmake.<br />
<br />
Variables in CMake have a scope that is a little different from most languages. When you set a variable, it is visible to the current CMakeLists file or function and any subdirectory's CMakeLists files, any functions or macros that are invoked, and any files that are included using the INCLUDE() (page 317) command. When a new subdirectory is processed (or a function called), a new variable scope is created and initialized with the current value of all variables in the calling scope. Any new variables created in the child scope, or changes made to existing variables, will not impact the parent scope. Consider the following example:
<br />
<syntaxhighlight lang="text"><br />
function (foo)<br />
message (${test}) # test is 1 here<br />
set (test 2)<br />
message (${test}) # test is 2 here, but only in this scope<br />
endfunction()<br />
<br />
set (test 1)<br />
foo()<br />
message (${test}) # test will still be 1 here<br />
</syntaxhighlight><br />
<br />
In some cases, you might want a function or subdirectory to set a variable in its parent's scope. This is one way for CMake to return a value from a function, and it can be done by using the PARENT_SCOPE option with the set() (page 330) command. We can modify the prior example so that the function foo changes the value of test in its parent's scope as follows:<br />
<br />
<syntaxhighlight lang="text"><br />
function (foo)<br />
message (${test}) # test is 1 here<br />
set (test 2 PARENT_SCOPE)<br />
message (${test}) # test still 1 in this scope<br />
endfunction()<br />
<br />
set (test 1)<br />
foo()<br />
message (${test}) # test will now be 2 here
</syntaxhighlight><br />
<br />
Variables can also represent a list of values. In these cases when the variable is expanded it will be expanded into multiple values. Consider the following example:<br />
<br />
<syntaxhighlight lang="text"><br />
# set a list of items<br />
set (items_to_buy apple orange pear beer)<br />
<br />
# loop over the items<br />
foreach (item ${items_to_buy})<br />
message ( "Don't forget to buy one ${item}" )<br />
endforeach ()<br />
</syntaxhighlight><br />
<br />
In some cases, you might want to allow the user building your project to set a variable from the CMake user interface. In that case, the variable must be a cache entry. Whenever CMake is run, it produces a cache file in the directory where the binary files are to be written. The values of this cache file are displayed by the CMake user interface. There are a few purposes of this cache. The first is to store the user's selections and choices, so that if they should run CMake again they will not need to reenter that information. For example, the option() (page 327) command creates a Boolean variable and stores it in the cache.<br />
<br />
<syntaxhighlight lang="text"><br />
option (USE_JPEG "Do you want to use the jpeg library")<br />
</syntaxhighlight><br />
<br />
The above line would create a variable called USE_JPEG and put it into the cache. That way the user can set that variable from the user interface and its value will remain in case the user should run CMake again in the future. To create a variable in the cache, use commands like option, find_file() (page 292), or the standard set command with the CACHE option.<br />
<br />
<syntaxhighlight lang="text"><br />
set (USE_JPEG ON CACHE BOOL "include jpeg support?")<br />
</syntaxhighlight><br />
<br />
When you use the cache option, also provide the type of the variable and a documentation string. The type of the variable is used by the GUI to control how that variable is set and displayed, but the value is always a string. Variable types include BOOL, PATH, FILEPATH, and STRING. The documentation string is used by the GUI to provide online help.<br />
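A sketch of the four cache entry types (the variable names and default values are hypothetical):

<syntaxhighlight lang="text">
# BOOL: shown as an on/off control in the GUI
set (USE_JPEG ON CACHE BOOL "include jpeg support?")

# PATH: shown with a directory chooser
set (DATA_DIR "/usr/share/mydata" CACHE PATH "where data files live")

# FILEPATH: shown with a file chooser
set (CONFIG_FILE "settings.ini" CACHE FILEPATH "configuration file to use")

# STRING: a plain text entry
set (BUILD_LABEL "nightly" CACHE STRING "label for this build")
</syntaxhighlight>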
<br />
Another purpose of the cache is to persistently store values between CMake runs. These entries may not be visible or adjustable by the user. Typically these values are system-dependent variables such as CMAKE_WORDS_BIGENDIAN, which require CMake to compile and run a program to determine their value. Once these values have been determined, they are stored in the cache to avoid having to recompute them every time CMake is run. CMake generally tries to limit these variables to properties that should never change (such as the byte order of the machine you are on). If you significantly change your computer, either by changing the operating system or switching to a different compiler, you will need to delete the cache file (and probably all of your binary tree's object files, libraries, and executables).<br />
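Such system checks are typically performed by CMake modules whose results land in the cache. As one hedged sketch using the standard TestBigEndian module:

<syntaxhighlight lang="text">
# compile and run a small test program; the result is stored in the
# cache so the check does not have to be repeated on later runs
include (TestBigEndian)
test_big_endian (IS_BIG)

if (IS_BIG)
  add_definitions (-DWORDS_BIGENDIAN)
endif ()
</syntaxhighlight>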
<br />
Variables that are in the cache also have a property indicating if they are advanced or not. By default, when a CMake GUI is run (such as ccmake or cmake-gui), the advanced cache entries are not displayed. This is so the user can focus on the cache entries that they should consider changing. The advanced cache entries are other options that the user can modify, but typically will not. It is not unusual for a large software project to have fifty or more options, and the advanced property lets a software project divide them into key options for most users and advanced options for advanced users. Depending on the project, there may not be any non-advanced cache entries. To make a cache entry advanced, the mark_as_advanced() (page 325) command is used with the name of the variable (a.k.a. cache entry).
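For instance (the entry names here are hypothetical), a niche tuning knob can be hidden from the default view like this:

<syntaxhighlight lang="text">
# a key option most users will see
set (USE_JPEG ON CACHE BOOL "include jpeg support?")

# an experts-only knob; hidden unless the GUI's advanced view is enabled
set (JPEG_BUFFER_SIZE 4096 CACHE STRING "internal I/O buffer size")
mark_as_advanced (JPEG_BUFFER_SIZE)
</syntaxhighlight>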
<br />
In some cases, you might want to restrict a cache entry to a limited set of predefined options. You can do this by setting the STRINGS (page 621) property on the cache entry. The following CMakeLists code illustrates this by creating a cache entry named CRYPTOBACKEND as usual, and then setting the STRINGS property on it to a set of three options.
<br />
<syntaxhighlight lang="text"><br />
set (CRYPTOBACKEND "OpenSSL" CACHE STRING<br />
"Select a cryptography backend")<br />
set_property (CACHE CRYPTOBACKEND PROPERTY STRINGS<br />
"OpenSSL" "LibTomCrypt" "LibDES")<br />
</syntaxhighlight><br />
<br />
When cmake-gui is run and the user selects the CRYPTOBACKEND cache entry, they will be presented with a pulldown to select which option they want, as shown in Figure 3.3.<br />
<br />
<<Figure 3.3 : Cache Value Options in cmake-gui>><br />
<br />
A few final points should be made concerning variables and their interaction with the cache. If a variable is in the cache, it can still be overridden in a CMakeLists file using the set command without the CACHE option. Cache values are checked when a referenced variable is not defined in the current scope. The set command will define a variable for the current scope without changing the value in the cache.<br />
<br />
<syntaxhighlight lang="text"><br />
# assume that FOO is set to ON in the cache<br />
<br />
set (FOO OFF)<br />
# sets foo to OFF for processing this CMakeLists file<br />
# and subdirectories; the value in the cache stays ON<br />
</syntaxhighlight><br />
<br />
Once a variable is in the cache, its "cache" value cannot normally be modified from a CMakeLists file. The reasoning behind this is that once CMake has put the variable into the cache with its initial value, the user may then modify that value from the GUI. If the next invocation of CMake overwrote their change back to the set value, the user would never be able to make a change that CMake wouldn't overwrite. A set(FOO ON CACHE BOOL "doc") command will typically only do something when the cache doesn't have the variable in it. Once the variable is in the cache, that command will have no effect.<br />
<br />
<br />
===Build Configurations===<br />
<br />
Build configurations allow a project to be built in different ways for debug, optimized, or any other special set of flags. CMake supports, by default, Debug, Release, MinSizeRel, and RelWithDebInfo configurations. Debug has the basic debug flags turned on. Release has the basic optimizations turned on. MinSizeRel has flags that produce the smallest object code, but not necessarily the fastest code. RelWithDebInfo builds an optimized build with debug information as well.
<br />
CMake handles the configurations in slightly different ways depending on the generator being used. The conventions of the native build system are followed when possible. This means that configurations impact the build in different ways when using Makefiles versus using Visual Studio project files.<br />
<br />
The Visual Studio IDE supports the notion of Build Configurations. A default project in Visual Studio usually has Debug and Release configurations. From the IDE you can select build Debug, and the files will be built with Debug flags. The IDE puts all of the binary files into directories with the name of the active configuration. This brings about an extra complexity for projects that build programs that need to be run as part of the build process from custom commands. See the CMAKE_CFG_INTDIR (page 625) variable and the custom commands section for more information about how to handle this issue. The variable CMAKE_CONFIGURATION_TYPES (page 640) is used to tell CMake which configurations to put in the workspace.<br />
<br />
With Makefile-based generators, only one configuration can be active at the time CMake is run, and it is specified with the CMAKE_BUILD_TYPE (page 639) variable. If the variable is empty then no flags are added to the build. If the variable is set to the name of a configuration, then the appropriate variables and rules (such as CMAKE_CXX_FLAGS_<ConfigName>) are added to the compile lines. Makefiles do not use special configuration subdirectories for object files. To build both debug and release trees, the user is expected to create multiple build directories using the out-of-source build feature of CMake, and set the CMAKE_BUILD_TYPE to the desired selection for each build. For example:
<br />
<syntaxhighlight lang="text"><br />
# With source code in the directory MyProject,
# to build MyProject-debug create that directory, cd into it, and run
ccmake ../MyProject -DCMAKE_BUILD_TYPE=Debug
# the same idea is used for the release tree MyProject-release
ccmake ../MyProject -DCMAKE_BUILD_TYPE=Release
</syntaxhighlight><br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>Onionmixer
https://trans.onionmixer.net/mediawiki/index.php?title=MastringCmakeVersion31:Chapter_02&diff=5600
MastringCmakeVersion31:Chapter 02 (2020-09-21T11:58:11Z)
<p>Onionmixer: CMAKE Chapter 2</p>
<hr />
<div>==CHAPTER TWO : GETTING STARTED==<br />
<br />
===Getting and Installing CMake on Your Computer===
<br />
Before using CMake, you will need to install or build the CMake binaries on your system. On many systems, you may find that CMake is already installed or is available for install with the standard package manager tool for the system. Cygwin, Debian, FreeBSD, OS X MacPorts, Mac OS X Fink, and many others all have CMake distributions. If your system does not have a CMake package, you can find CMake precompiled for many common architectures at www.cmake.org. If you do not find precompiled binaries for your system, then you can build CMake from source. To build CMake, you will need a modern C++ compiler.<br />
<br />
====UNIX and Mac Binary Installations====<br />
<br />
If your system provides CMake as one of its standard packages, follow your system's package installation instructions. If your system does not have CMake, or has an out-of-date version of CMake, you can download precompiled binaries from www.cmake.org. The binaries from www.cmake.org come in the form of a compressed .tar file. To install, simply extract the compressed .tar file into a destination directory such as /usr/local. Any directory is allowed, so CMake does not require root privileges for installation.
<br />
====Windows Binary Installation====<br />
<br />
For Windows, CMake provides an installer executable available for download from www.cmake.org. To install this file, simply run the executable on the Windows machine where you want to install CMake. You will be able to run CMake from the Start Menu or from the command line after it is installed.<br />
<br />
===Building CMake Yourself===<br />
<br />
If binaries are not available for your system, or if binaries are not available for the version of CMake you wish to use, you can build CMake from the source code. You can obtain the CMake source code from the www.cmake.org download page. Once you have the source code, it can be built in two different ways. If you have a version of CMake on your system, you can use it to build other versions of CMake. The current development version of CMake can generally be built from the previous release of CMake. This is how new versions of CMake are built on most Windows systems.<br />
<br />
The second way to build CMake is by running its bootstrap build script. To do this, change directory into your CMake source directory and type:<br />
<br />
<syntaxhighlight lang="text"><br />
./bootstrap<br />
make<br />
make install<br />
</syntaxhighlight><br />
<br />
The make install step is optional since CMake can run directly from the build directory if desired. On UNIX, if you are not using the system's C++ compiler, you need to tell the bootstrap script which compiler you want to use. This is done by setting the environment variable CXX before running bootstrap. If you need to use any special flags with your compiler, set the CXXFLAGS environment variable. For example, on the SGI with the 7.3X compiler, you would build CMake like this:<br />
<br />
<syntaxhighlight lang="text"><br />
cd CMake<br />
(setenv CXX CC; setenv CXXFLAGS "-LANG:std"; ./bootstrap)<br />
make<br />
make install<br />
</syntaxhighlight><br />
<br />
===Basic CMake Usage and Syntax===<br />
<br />
Using CMake is simple. The build process is controlled by creating one-or-more CMakeLists files (actually CMakeLists.txt but this guide will leave off the extension in most cases) in each of the directories that make up a project. The CMakeLists files contain the project description in CMake's simple language. The language is expressed as a series of comments and commands. Comments start with # and run to the end of the line. Commands have the form<br />
<br />
<syntaxhighlight lang="text"><br />
command (args...)<br />
</syntaxhighlight><br />
<br />
where command is the name of the command, and args is a whitespace-separated list of arguments. Each command is evaluated in the order that it appears in the CMakeLists file. CMake is no longer case sensitive to command names as of version 2.2, so where you see command, you could use COMMAND or Command instead. Older versions of CMake only accepted uppercase commands.
<br />
<syntaxhighlight lang="text"><br />
command ("") # 1 quoted argument<br />
command ("a b c") # 1 quoted argument<br />
command ("a;b;c") # 1 quoted argument<br />
command ("a" "b" "c") # 3 quoted arguments<br />
command (a b c) # 3 unquoted arguments<br />
command (a;b;c) # 1 unquoted argument expands to 3
</syntaxhighlight><br />
<br />
CMake supports simple variables storing strings. Use the set() (page 330) command to set variable values. In its simplest form, the first argument to set is the name of the variable and the rest of the arguments are the values. Multiple value arguments are packed into a semicolon-separated list and stored in the variable as a string. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
set (Foo "") # 1 quoted arg -> value is ""<br />
set (Foo a) # 1 unquoted arg -> value is "a"<br />
set (Foo "a b c") # 1 quoted arg -> value is "a b c"<br />
set (Foo a b c) # 3 unquoted args -> value is "a;b;c"<br />
</syntaxhighlight><br />
<br />
Variables may be referenced in command arguments using syntax ${VAR} where VAR is the variable name. If the named variable is not defined, the reference is replaced with an empty string; otherwise it is replaced by the value of the variable. Replacement is performed prior to the expansion of unquoted arguments, so variable values containing semicolons are split into zero-or-more arguments in place of the original unquoted<br />
argument. For example:<br />
<br />
<syntaxhighlight lang="text"><br />
set (Foo a b c) # 3 unquoted args -> value is "a;b;c"<br />
command(${Foo}) # unquoted arg replaced by a;b;c<br />
# and expands to three arguments<br />
command("${Foo}") # quoted arg value is "a;b;c"
set (Foo "") # 1 quoted arg -> value is empty string<br />
command(${Foo}) # unquoted arg replaced by empty string<br />
# and expands to zero arguments<br />
command("${Foo}") # quoted arg value is empty string
</syntaxhighlight><br />
<br />
System environment variables and Windows registry values can be accessed directly in CMake. To access system environment variables, use the syntax $ENV{VAR}. CMake can also reference registry entries in many commands using a syntax of the form [HKEY_CURRENT_USER\\Software\\path1\\path2;key], where the paths are built from the registry tree and key.<br />
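A hedged sketch of these two mechanisms (the environment variable, program name, and registry key shown are hypothetical examples):

<syntaxhighlight lang="text">
# read an environment variable
message ("building as user: $ENV{USER}")

# set an environment variable for the rest of this CMake run
set (ENV{MY_BUILD_FLAG} 1)

# on Windows, search a path taken from a registry entry
find_program (MY_TOOL mytool PATHS
  "[HKEY_CURRENT_USER\\Software\\MyCompany\\MyTool;InstallDir]")
</syntaxhighlight>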
<br />
<br />
===Hello World for CMake===<br />
<br />
For starters, let us consider the simplest possible CMakeLists file. To compile an executable from one source<br />
file, the CMakeLists file would contain two lines:<br />
<br />
<syntaxhighlight lang="text"><br />
project (Hello)<br />
add_executable (Hello Hello.c)<br />
</syntaxhighlight><br />
<br />
To build the Hello executable, follow the process described in Running CMake (See section 0) to generate the build files. The project() (page 327) command indicates what the name of the resulting workspace should be, and the add_executable() (page 273) command adds an executable target to the build process. That's all there is to it for this simple example. If your project requires a few more files, it is quite easy to add them to the add_executable line as shown below.
<br />
<syntaxhighlight lang="text"><br />
add_executable (Hello Hello.c File2.c File3.c File4.c)<br />
</syntaxhighlight><br />
<br />
add_executable is just one of many commands available in CMake. Consider the more complicated example below.<br />
<br />
<syntaxhighlight lang="text"><br />
cmake_minimum_required (VERSION 2.6)
project (HELLO)<br />
<br />
set (HELLO_SRCS Hello.c File2.c File3.c)<br />
<br />
if (WIN32)<br />
set (HELLO_SRCS ${HELLO_SRCS} WinSupport.c)
else ()<br />
set (HELLO_SRCS ${HELLO_SRCS} UnixSupport.c)
endif ()<br />
<br />
add_executable (Hello ${HELLO_SRCS})
<br />
# look for the Tcl library<br />
find_library (TCL_LIBRARY<br />
NAMES tcl tcl84 tcl83 tcl82 tcl80
PATHS /opt/TclTk/lib c:/TclTk/lib
)<br />
<br />
if (TCL_LIBRARY)<br />
target_link_libraries (Hello ${TCL_LIBRARY})
endif ()<br />
</syntaxhighlight><br />
<br />
In this example, the set() (page 330) command is used to group together source files into a list. The if() (page 313) command is used to add either WinSupport.c or UnixSupport.c to this list based on whether or not CMake is running on Windows. Finally, the add_executable() (page 273) command is used to build the executable with the files listed in the variable HELLO_SRCS. The find_library() (page 294) command looks for the Tcl library under a few different names and in a few different paths. An if command checks if the TCL_LIBRARY was found, and if so, adds it to the link line for the Hello executable target.<br />
<br />
<br />
===How to Run CMake?===<br />
<br />
Once CMake has been installed on your system, using it to build a project is easy. There are two main directories CMake uses when building a project: the source directory and the binary directory. The source directory is where the source code for your project is located. This is also where the CMakeLists files will be found. The binary directory is where you want CMake to put the resulting object files, libraries, and executables. CMake will not write any files to the source directory, only to the binary directory. We encourage use of "out-of-source" builds in which the source and binary directories are different, but one may also perform "in-source" builds in which the source and binary directories are the same.<br />
<br />
CMake supports both in-source and out-of-source builds on all operating systems. This means that you can configure your build to be completely outside of the source code tree, which makes it very easy to remove all of the files generated by a build. Having the build tree differ from the source tree also makes it easy to support having multiple builds of a single source tree. This is useful when you want to have multiple builds with different options but just one copy of the source code. Now let us consider the specifics of running CMake using its Qt-based GUI and command line interfaces.<br />
<br />
====Running CMake's Qt Interface====<br />
<br />
CMake includes a Qt-based user interface that can be used on most platforms, including UNIX, Mac OS X, and Windows. This interface is included in the CMake source code, but you will need an installation of Qt on your system in order to build it.<br />
<br />
<<Figure 2.1 : Qt based CMake GUI>><br />
<br />
On Windows, the executable is named cmake-gui.exe and it should be in your Start menu under Program Files. There may also be a shortcut on your desktop, or if you built CMake from the source, it will be in the build directory. For UNIX and Mac users, the executable is named cmake-gui and it can be found where you installed the CMake executables. A GUI will appear similar to what is shown in Figure 2.1. The top two fields are the source code and binary directories. They allow you to specify where the source code is located for what you want to compile, and where the resulting binaries should be placed. You should set these two values first. If the binary directory you specify does not exist, it will be created for you. If the binary directory has been configured by CMake before, it will then automatically set the source tree.
<br />
The middle area is where you can specify different options for the build process. More obscure variables may be hidden, but can be seen if you select "Advanced View" from the view pulldown. You can search for values in the middle area by typing all or part of the name into the search box. This can be handy for finding specific settings or options in a large project. The bottom area of the window includes the Configure and Generate<br />
buttons as well as a progress bar and scrollable output window.<br />
<br />
Once you have specified the source code and binary directories, click the Configure button. This will cause CMake to read in the CMakeLists files from the source code directory and update the cache area to display any new options for the project. If you are running cmake-gui for the first time on this binary directory it will prompt you to determine which generator you wish to use, as shown in Figure 2.2. This dialog also presents options for customizing and tweaking the compilers you wish to use for the build.<br />
<br />
After the first configure, you can adjust the cache settings if desired and click the Configure button again. New values that were created by the configure process will be colored red. To be sure you have seen all possible values, click Configure until none of the values are red and you are happy with all the settings. Once you are done configuring, click the Generate button to produce the appropriate files.<br />
<br />
It is important that you make sure that your environment is suitable for running cmake-gui. If you are using an IDE such as Visual Studio, your environment will be set up correctly. If you are using NMake or MinGW, make sure that the compiler can run from your environment. You can either directly set the required environment variables for your compiler or use a shell in which they are already set. For example, Microsoft Visual Studio has an option on the start menu for creating a Visual Studio Command Prompt. This opens up a command prompt window that has its environment already set up for Visual Studio. You should run cmake-gui from this command prompt if you want to use NMake Makefiles. The same approach applies to MinGW; you should run cmake-gui from a MinGW shell that has a working compiler in its path.<br />
<br />
When cmake-gui finishes, it will have generated the build files in the binary directory you specified. If Visual Studio was selected as the generator, an MSVC workspace (or solution) file is created. This file's name is based on the name of the project you specified in the project() (page 327) command at the beginning of your CMakeLists file. For many other generator types, Makefiles are generated. The next step in this process is to open the workspace with MSVC. Once open, the project can be built in the normal manner of Microsoft Visual C++. The ALL_BUILD target can be used to build all of the libraries and executables in the package. If you are using a Makefile build type, then you would build by running make or nmake on the resulting Makefiles.<br />
<br />
<<Figure 2.2: Selecting a Generator>><br />
<br />
====Running the ccmake Curses Interface====<br />
<br />
On most UNIX platforms, if the curses library is supported, CMake provides an executable called ccmake. This interface is a terminal-based text application that is very similar to the Qt-based GUI. To run ccmake, change directory (cd) to the directory where you want the binaries to be placed. This can be the same directory as the source code for what we call in-source builds, or it can be a new directory you create. Then run ccmake with the path to the source directory on the command line. For in-source builds, use "." for the source directory. This will start the text interface as shown in Figure 2.3 (in this case, the cache variables are from VTK and most are set automatically).<br />
<br />
<<Figure 2.3 : ccmake running on UNIX>><br />
<br />
Brief instructions are displayed in the bottom of the window. If you hit the "c" key, it will configure the project. You should always configure after changing values in the cache. To change values, use the arrow keys to select cache entries, and hit the enter key to edit them. Boolean values will toggle with the enter key. Once you have set all the values as you like, you can hit the "g" key to generate the Makefiles and exit. You can also hit "h" for help, "q" to quit, and "t" to toggle the viewing of advanced cache entries. Two examples of CMake usage on the UNIX platform follow for a hello world project called Hello. In the first example, an in-source build is performed.<br />
<br />
<syntaxhighlight lang="text"><br />
cd Hello<br />
ccmake .<br />
make<br />
</syntaxhighlight><br />
<br />
In the second example, an out-of-source build is performed.<br />
<br />
<syntaxhighlight lang="text"><br />
mkdir Hello-Linux<br />
cd Hello-Linux<br />
ccmake ../Hello<br />
make<br />
</syntaxhighlight><br />
<br />
<br />
====Running CMake from the Command Line====<br />
<br />
From the command line, CMake can be run as an interactive question-and-answer session or as a non-interactive program. To run in interactive mode, just pass the "-i" option to CMake. This will cause CMake to ask you for a value for each entry in the cache file for the project. CMake will provide reasonable defaults, just like it does in the GUI and curses-based interfaces. The process stops when there are no more questions to ask. An example of using the interactive mode of CMake is provided below.<br />
<br />
<syntaxhighlight lang="text"><br />
$ cmake -i -G "NMake Makefiles" ../CMake<br />
Would you like to see advanced options? [No]:<br />
Please wait while cmake processes CMakeLists.txt files....<br />
<br />
Variable Name: BUILD_TESTING<br />
Description: Build the testing tree.<br />
Current Value: ON<br />
New Value (Enter to keep current value):<br />
<br />
Variable Name: CMAKE_INSTALL_PREFIX<br />
Description: Install path prefix, prepended onto install directories.<br />
Current Value: C:/Program Files/CMake<br />
New Value (Enter to keep current value):<br />
<br />
Please wait while cmake processes CMakeLists.txt files....<br />
<br />
CMake complete, run make to build project.<br />
</syntaxhighlight><br />
<br />
Using CMake to build a project in non-interactive mode is a simple process if the project has few or no options. For larger projects like VTK, using ccmake, cmake -i, or cmake-gui is recommended. To build a project with non-interactive CMake, first change directory to where you want the binaries to be placed. For an in-source build, run "cmake ." and pass in any options using the -D flag. For out-of-source builds, the process is the same except you also provide the path to the source code as the argument to cmake. Then type make and your project should compile. Some projects will have install targets as well and you can type make install to install them.<br />
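<br />
As a concrete sketch (the project name Hello and the cache options shown are illustrative assumptions, not options every project provides), a non-interactive out-of-source build might look like this:<br />
<br />
<syntaxhighlight lang="text"><br />
# create a separate binary tree next to the Hello source tree<br />
mkdir Hello-build<br />
cd Hello-build<br />
<br />
# configure non-interactively, passing cache values with -D<br />
cmake -DCMAKE_BUILD_TYPE:STRING=Release -DBUILD_SHARED_LIBS:BOOL=ON ../Hello<br />
<br />
# compile, and install if the project defines an install target<br />
make<br />
make install<br />
</syntaxhighlight><br />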
<br />
====Specifying the Compiler to CMake====<br />
<br />
On some systems, you may have more than one compiler to choose from or your compiler may be in a non-standard place. In these cases, you will need to specify to CMake where your desired compiler is located. There are three ways to specify this: the generator can specify the compiler; an environment variable can be set; or a cache entry can be set. Some generators are tied to a specific compiler; for example, the Visual Studio 8 generator always uses the Microsoft Visual Studio 8 compiler. For Makefile-based generators, CMake will try a list of usual compilers until it finds a working one. The list can be found in the files:<br />
<br />
<syntaxhighlight lang="text"><br />
Modules/CMakeDetermineCCompiler.cmake and<br />
Modules/CMakeDetermineCXXCompiler.cmake<br />
</syntaxhighlight><br />
<br />
The lists can be preempted with environment variables that can be set before CMake is run. The CC environment variable specifies the C compiler, while CXX specifies the C++ compiler. You can specify the compilers<br />
directly on the command line by using -D CMAKE_CXX_COMPILER=cl for example.<br />
<br />
Once CMake has been run and picked a compiler, you can change the selection by changing the cache entries CMAKE_CXX_COMPILER and CMAKE_C_COMPILER, although this is not recommended. The problem with doing this is that the project you are configuring may have already run some tests on the compiler to determine what it supports. Changing the compiler does not normally cause these tests to be rerun, which can lead to incorrect results. If you must change the compiler, start over with an empty binary directory. The flags for the compiler and the linker can also be changed by setting environment variables. Setting LDFLAGS will initialize the cache values for link flags, while CXXFLAGS and CFLAGS will initialize CMAKE_CXX_FLAGS and CMAKE_C_FLAGS respectively.<br />
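<br />
For example (the compiler names assume GNU gcc and g++ are installed; substitute whatever your system provides), a fresh, empty build tree could be configured with either of the following:<br />
<br />
<syntaxhighlight lang="text"><br />
# select the compilers and initial flags via environment variables<br />
env CC=gcc CXX=g++ CXXFLAGS="-O2 -Wall" cmake ../Hello<br />
<br />
# or set the cache entries directly on the command line<br />
cmake -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ ../Hello<br />
</syntaxhighlight><br />
<br />
Either form only takes effect on the first configuration of an empty binary directory, for the reasons described above.<br />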
<br />
<br />
====Dependency Analysis====<br />
<br />
CMake has powerful, built-in implicit dependency (#include) analysis capabilities for C, C++, and Fortran source code files. CMake also has limited support for Java dependencies. Since Integrated Development Environments (IDEs) support and maintain their own dependency information, CMake skips this step for those build systems. However, Makefiles with a make program do not know how to automatically compute and keep dependency information up-to-date. For these builds, CMake automatically computes dependency information for C, C++, and Fortran files. Both the generation and maintenance of these dependencies are automatically done by CMake. Once a project is initially configured by CMake, users only need to run make, and CMake does the rest of the work. CMake's dependencies fully support parallel builds for multiprocessor systems.<br />
<br />
Although users do not need to know how CMake does this work, it may be useful to look at the dependency information files for a project. The information for each target is stored in four files called depend.make, flags.make, build.make, and DependInfo.cmake. depend.make stores the dependency information for all the object files in the directory. flags.make contains the compile flags used for the source files of this target. If they change then the files will be recompiled. DependInfo.cmake is used to keep the dependency information up-to-date and contains information about which files are part of the project and the languages they are in. Finally, the rules for building the dependencies are stored in build.make. If a dependency is out-of-date then all of the dependencies for that target will be recomputed, keeping the dependency information current.<br />
<br />
===Editing CMakeLists Files===<br />
<br />
CMakeLists files can be edited in almost any text editor. Some editors, such as Notepad++, come with CMake syntax highlighting and indentation support built-in. For editors such as Emacs or Vim, CMake includes indentation and syntax highlighting modes. These can be found in the Auxiliary directory of the source distribution, or downloaded from the CMake web site. The file cmake-mode.el is the Emacs mode, and cmake-indent.vim and cmake-syntax.vim are used by Vim. Within Visual Studio, CMakeLists files are listed as part of the project and you can edit them simply by double-clicking on them. Within any of the supported generators (Makefiles, Visual Studio, etc.), if you edit a CMakeLists file and rebuild, there are rules that will automatically invoke CMake to update the generated files (e.g. Makefiles or project files) as required. This helps to ensure that your generated files are always in sync with your CMakeLists files.<br />
<br />
Since CMake computes and maintains dependency information, CMake executables must always be available (though they don't have to be in your PATH) when make or an IDE is being run on CMake-generated files. This means that if a CMake input file changes on disk, your build system will automatically re-run CMake and produce up-to-date build files. For this reason, you generally should not generate Makefiles or projects with CMake and move them to another machine that does not have CMake installed.<br />
<br />
===Setting Initial Values for CMake===<br />
<br />
While CMake works well in an interactive mode, sometimes you will need to set up cache entries without running a GUI. This is common when setting up nightly dashboards, or if you will be creating many build trees with the same cache values. In these cases, the CMake cache can be initialized in two different ways. The first way is to pass the cache values on the CMake command line using -D CACHE_VAR:TYPE=VALUE arguments. For example, consider the following nightly dashboard script for a UNIX machine:<br />
<br />
<syntaxhighlight lang="text"><br />
#!/bin/tcsh<br />
<br />
cd ${HOME}<br />
<br />
# wipe out the old binary tree and then create it again<br />
rm -rf Foo-Linux<br />
mkdir Foo-Linux<br />
cd Foo-Linux<br />
<br />
# run cmake to setup the cache<br />
cmake -DBUILD_TESTING:BOOL=ON <etc...> ../Foo<br />
<br />
# generate the dashboard<br />
ctest -D Nightly<br />
</syntaxhighlight><br />
<br />
The same idea can be used with a batch file on Windows. The second way is to create a file to be loaded using CMake's -C option. In this case, instead of setting up the cache with -D options, it is done through a file that is parsed by CMake. The syntax for this file is the standard CMakeLists syntax, which is typically a series of set() (page 330) commands such as:<br />
<br />
<syntaxhighlight lang="text"><br />
#Build the vtkHybrid kit.<br />
set (VTK_USE_HYBRID ON CACHE BOOL "doc string")<br />
</syntaxhighlight><br />
<br />
In some cases there might be an existing cache, and you want to force the cache values to be set a certain way. For example, say you want to turn Hybrid on even if the user has previously run CMake and turned it off. Then you can do<br />
<br />
<syntaxhighlight lang="text"><br />
#Build the vtkHybrid kit always.<br />
set (VTK_USE_HYBRID ON CACHE BOOL "doc" FORCE)<br />
</syntaxhighlight><br />
<br />
Another option is that you want to set and then hide options so the user will not be tempted to adjust them later on. This can be done using the following commands<br />
<br />
<syntaxhighlight lang="text"><br />
#Build the vtkHybrid kit always and don't distract<br />
#the user by showing the option.<br />
set (VTK_USE_HYBRID ON CACHE INTERNAL "doc" FORCE)<br />
mark_as_advanced (VTK_USE_HYBRID)<br />
</syntaxhighlight><br />
<br />
You might be tempted to edit the cache file directly, or to "initialize" a project by giving it an initial cache file. This may not work and could cause additional problems in the future. First, the syntax of the CMake cache is subject to change. Second, cache files contain full paths which make them unsuitable for moving between binary trees. If you want to initialize a cache file, use one of the two standard methods described above.<br />
<br />
<br />
===Building Your Project===<br />
<br />
After you have run CMake, your project will be ready to be built. If your target generator is based on Makefiles then you can build your project by changing the directory to your binary tree and typing make (or gmake or nmake as appropriate). If you generated files for an IDE such as Visual Studio, you can start your IDE, load the project files into it, and build as you normally would.<br />
<br />
Another option is to use CMake's --build option from the command line. This option is simply a convenience that allows you to build your project from the command line, even if that requires launching an IDE. The command line options for --build include:<br />
<br />
<syntaxhighlight lang="text"><br />
Usage: cmake --build <dir> [options] [-- [native-options]]<br />
<br />
Options:<br />
<dir> = Project binary directory to be built.<br />
--target <tgt> = Build <tgt> instead of default targets.<br />
--config <cfg> = For multi-configuration tools, choose <cfg>.<br />
--clean-first = Build target 'clean' first, then build.<br />
= (To clean only, use --target 'clean'.)<br />
<br />
-- = Pass remaining options to the native tool.<br />
</syntaxhighlight><br />
<br />
Even if you are using Visual Studio as your generator, you can type the following to build your project from the command line:<br />
<br />
<syntaxhighlight lang="text"><br />
cmake --build <your binary dir><br />
</syntaxhighlight><br />
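<br />
These options can be combined; for example (MyTarget is a hypothetical target name, and the trailing -j4 applies only to Makefile-based native tools):<br />
<br />
<syntaxhighlight lang="text"><br />
# rebuild only MyTarget in the Release configuration, cleaning first,<br />
# and pass -j4 through to the native build tool<br />
cmake --build . --target MyTarget --config Release --clean-first -- -j4<br />
</syntaxhighlight><br />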
<br />
That is all there is to installing and running CMake for simple projects. In the following chapters, we will consider CMake in more detail and explain how to use it on more complex software projects.<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>
<hr />
<div>==CHAPTER ONE :: WHY CMAKE?==<br />
<br />
If you have ever maintained the build and installation process for a software package, you will be interested in CMake. CMake is an open-source build manager for software projects that allows developers to specify build parameters in a simple, portable, text file format. This file is then used by CMake to generate project files for native build tools including Integrated Development Environments such as Microsoft Visual Studio or Apple's Xcode, as well as UNIX, Linux, NMake, Ninja, and Borland style Makefiles. CMake handles the difficult aspects of building software such as cross-platform builds, system introspection, and user customized builds, in a simple manner that allows users to easily tailor builds for complex hardware and software systems.<br />
<br />
For any project, and especially cross-platform projects, there is a need for a unified build system. Many non CMake-based projects ship with both a UNIX Makefile (or Makefile.in) and a Microsoft Visual Studio workspace. This requires that developers constantly try to keep both build systems up-to-date and consistent with each other. To target additional build systems, such as Xcode, requires even more custom copies of these files, creating an even bigger problem. This problem is compounded if you try to support optional components, such as including JPEG support if libjpeg is available on the system. CMake solves this by consolidating these different operations into one simple, easy-to-understand file format.<br />
<br />
If you have multiple developers working on a project, or multiple target platforms, then the software will have to be built on more than one computer. Given the wide range of installed software and custom options that are involved with setting up a modern computer, the chances are that two computers running the same OS will be slightly different. CMake provides many benefits for single platform, multi-machine development environments including:<br />
<br />
* The ability to automatically search for programs, libraries, and header files that may be required by the software being built. This includes the ability to consider environment variables and Windows registry settings when searching.<br />
* The ability to build in a directory tree outside of the source tree. This is a useful feature found on many UNIX platforms; CMake provides this feature on Windows as well. This allows a developer to remove an entire build directory without fear of removing source files.<br />
* The ability to create complex, custom commands for automatically generated files such as Qt's moc (qt.nokia.com) or SWIG (www.swig.org) wrapper generators. These commands are used to generate new source files during the build process that are in turn compiled into the software.<br />
* The ability to select optional components at configuration time. For example, several of VTK's (www.vtk.org) libraries are optional, and CMake provides an easy way for users to select which libraries are built.<br />
* The ability to automatically generate workspaces and projects from a simple text file. This can be very handy for systems that have many programs or test cases, each of which requires a separate project file, typically a tedious manual process to create using an IDE.<br />
* The ability to easily switch between static and shared builds. CMake knows how to create shared libraries and modules on all platforms supported. Complicated platform-specific linker flags are handled, and advanced features like built-in run time search paths for shared libraries are supported on many UNIX systems.<br />
* Automatic generation of file dependencies and support for parallel builds on most platforms.<br />
<br />
When developing cross-platform software, CMake provides a number of additional features:<br />
<br />
* The ability to test for machine byte order and other hardware-specific characteristics.<br />
* A single set of build configuration files that work on all platforms. This avoids the problem of developers having to maintain the same information in several different formats inside a project.<br />
* Support for building shared libraries on all platforms that support it.<br />
* The ability to configure files with system-dependent information, such as the location of data files and other information. CMake can create header files that contain information such as paths to data files and other information in the form of #define macros. System specific flags can also be placed in configured header files. This has advantages over command line -D options to the compiler, because it allows other build systems to use the CMake built library without having to specify the exact same command line options used during the build.<br />
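<br />
As a minimal sketch of this last point (the file and variable names here are illustrative), a template header and the CMakeLists line that configures it might look like:<br />
<br />
<syntaxhighlight lang="text"><br />
# CMakeLists.txt fragment<br />
set (FOO_DATA_DIR "${CMAKE_INSTALL_PREFIX}/share/foo")<br />
configure_file (fooConfig.h.in "${CMAKE_BINARY_DIR}/fooConfig.h")<br />
<br />
# fooConfig.h.in template; @VAR@ references are replaced at configure time<br />
#define FOO_DATA_DIR "@FOO_DATA_DIR@"<br />
</syntaxhighlight><br />
<br />
Code in the project can then #include the generated fooConfig.h instead of receiving the path through a -D compiler option.<br />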
<br />
<br />
<br />
===The History of CMake===<br />
<br />
CMake development began in 1999 as part of the Insight Toolkit (ITK, www.itk.org), funded by the U.S. National Library of Medicine. ITK is a large software project that works on many platforms and can interact with many other software packages. To support this, a powerful, yet easy-to-use build tool was required. Having worked with build systems for large projects in the past, the developers designed CMake to address these needs. Since then CMake has continuously grown in popularity, with many projects and developers adopting it for its ease-of-use and flexibility. Since 1999, CMake has been under active development and has matured to the point where it is a proven solution for a wide range of build issues. The most telling example of this is the successful adoption of CMake as the build system of the K Desktop Environment (KDE), arguably the largest open-source software project in existence.<br />
<br />
CMake also includes software testing support in the form of CTest. Part of the process of testing software involves building the software, possibly installing it, and determining what parts of the software are appropriate for the current system. This makes CTest a logical extension of CMake as it already has most of this information. In a similar vein, CMake contains CPack, which is designed to support cross-platform distribution of software. It provides a cross-platform approach to creating native installations for your software, making use of existing popular packages such as NSIS, RPM, Cygwin, and PackageMaker.<br />
<br />
CMake continues to track and support popular build tools as they become available. CMake has quickly provided support for new versions of Microsoft's Visual Studio and Apple's Xcode IDE. In addition, support for the new build tool Ninja from Google has been added to CMake. With CMake, once you write your input files you get support for new compilers and build systems for free because the support for them is built into new releases of CMake and not tied to your software distribution. CMake also has ongoing support for cross compiling to other operating systems or embedded devices. Most commands in CMake properly handle the differences between the host system and the target platform when cross-compiling.<br />
<br />
====Why Not Use Autoconf?====<br />
<br />
Before developing CMake, its authors had experience with the existing set of available build tools. Autoconf combined with Automake provides some of the same functionality as CMake, but to use these tools on a Windows platform requires the installation of many additional tools not found natively on a Windows box. In addition to requiring a host of tools, autoconf can be difficult to use or extend, and impossible for performing some tasks that are easy in CMake. Even if you do get autoconf and its required environment running on your system, it generates Makefiles that will force users to the command line. CMake, on the other hand, provides a choice, allowing developers to generate project files that can be used directly from the IDE to which Windows and Xcode developers are accustomed.<br />
<br />
While autoconf supports user-specified options, it does not support dependent options where one option depends on another property or selection. For example, in CMake you could make a multithreading option dependent on first determining whether the user's system has multithreading support. CMake provides an interactive user interface, making it easy for the user to see which options are available and how to set them.<br />
<br />
For UNIX users, CMake also provides automated dependency generation that is not done directly by autoconf. CMake's simple input format is also easier to read and maintain than a combination of Makefile.in and configure.in files. The ability of CMake to remember and chain library dependency information has no equivalent in autoconf/automake.<br />
<br />
====Why Not Use JAM, qmake, SCons, or ANT?====<br />
<br />
Other tools such as ANT, qmake, SCons, and JAM have taken different approaches to solving these problems and they have helped us to shape CMake. Of the four, qmake is the most similar to CMake, although it lacks much of the system interrogation that CMake provides. Qmake's input format is more closely related to a traditional Makefile. ANT, JAM, and SCons are also cross-platform although they do not support generating native project files. They do break away from the traditional Makefile-oriented input with ANT using XML; JAM using its own language; and SCons using Python. A number of these tools run the compiler directly, as opposed to letting the system's build process perform that task. Many of these tools require other tools such as Python or Java to be installed before they will work.<br />
<br />
====Why Not Script It Yourself?====<br />
<br />
Some projects use existing scripting languages such as Perl or Python to configure build processes. Although similar functionality can be achieved with systems like this, over-use of these tools can make the build process more of an Easter egg hunt than a simple-to-use build system. When building your software package, users are forced to find and install version 4.3.2 of this and 3.2.4 of that before they can even start the build process. To avoid that problem, it was decided that CMake would require no more tools than the software it was being used to build would require. At a minimum, using CMake requires a C compiler, that compiler's native build tools, and a CMake executable. CMake was written in C++, requires only a C++ compiler to build, and precompiled binaries are available for most systems. Scripting it yourself also typically means you will not be generating native Xcode or Visual Studio workspaces, making Mac and Windows builds limited.<br />
<br />
====On What Platforms Does CMake Run?====<br />
<br />
CMake runs on a wide variety of platforms including Microsoft Windows, Apple Mac OS X, and most UNIX or UNIX-like platforms. At the time of the writing of this book, CMake was tested nightly on the following platforms: Windows 98/2000/XP/Vista/7, AIX, HPUX, IRIX, Linux, Mac OS X, Solaris, OSF, QNX, CYGWIN, MinGW, and FreeBSD. You can check www.cmake.org for a current list of tested platforms.<br />
<br />
Likewise, CMake supports most common compilers. It supports the GNU compiler on all CMake-supported platforms. Other tested compilers include Visual Studio 6 through 11, Intel C, SGI CC, Mips Pro, Borland, Sun CC, and HP aCC. CMake should work for most UNIX-style compilers out-of-the-box. If the compiler takes arguments in a strange way, then see the section Porting CMake to a New Platform for information on how to customize CMake for a new compiler.<br />
<br />
====How Stable is CMake?====<br />
<br />
Before adopting any new technology or tool for a project, a developer will want to know how well supported and popular the tool is. Over the past 12 years, CMake has grown in popularity as a build tool. Both the developer and user communities continue to grow. The website Ohloh (http://www.ohloh.net) reports that there are over 8,000,000 lines of CMake code in existence. CMake has continued to develop support for new build technologies and tools as they become available. The CMake development team has a strong commitment to backwards compatibility. If CMake can build your project once, it should always be able to build your project. Also, since CMake is an open-source project, the source code is always available for a project to edit and patch as needed.<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>
<hr />
<div>==CHAPTER TWO :: WHY CMAKE?==<br />
<br />
If you have ever maintained the build and installation process for a software package, you will be interested in CMake. CMake is an open-source build manager for software projects that allows developers to specify build parameters in a simple, portable, text file format. This file is then used by CMake to generate project files for native build tool s including Integrated Development Environments such as Microsoft Visual Studio or Apple's Xcode, as well as UNIX, Linux, CMake, Ninja, and Borland style Makefiles. CMake handles the difficult aspects of building software such as cross-platform builds, system introspection, and user customized builds, in a simple manner that allows users to easily tailor builds for complex hardware and software systems.<br />
<br />
For any project, and especially cross-platform projects, there is a need for a unified build system. Many non CMake-based projects ship with both a UNIX Makefile (or Makefile.in) and a Microsoft Visual Studio workspace. This requires that developers constantly try to keep both build systems up-to-date and consistent with each other. To target additional build systems, such as Xcode, requires even more custom copies of these files, creating an even bigger problem. This problem is compounded if you try to support optional components, such as including JPEG support if libjpeg is available on the system. CMake solves this by consolidating these different operations into one simple, easy-to-understand file format.<br />
<br />
If you have multiple developers working on a project, or multiple target platforms, then the software will have to be built on more than one computer. Given the wide range of installed software and custom options that are involved with setting up a modern computer, the chances are that two computers running the same OS will be slightly different. CMake provides many benefits for single platform, multi-machine development environments including:<br />
<br />
* The ability to automatically search for programs, libraries, and header fi les that m ay be required by the software being built. This includes the ability to consider environment variables and Window's registry settings when searching.<br />
* The ability to build in a directory tree outside of the source tree. This is a useful feature found on many UNIX platforms; CMake provides this feature on Windows as well. This allows a developer to remove an entire build directory without fear of removing source files.<br />
* The ability to create complex, custom commands for automatically generated files such as Qt's moc (qt.nokia.com) or SWIG (www.swig.org) wrapper generators. These commands are used to generate new source files during the build process that are in turn compiled into the software.<br />
* The ability to select optional components at configuration time. For example, several of VTK's (www.vtk.org) libraries are optional, and CMake provides an easy way for users to select which libraries are built.<br />
* The ability to automatically generate workspaces and projects from a simple text file. This can be very handy for systems that have many programs or test cases, each of which requires a separate project file, typically a tedious manual process to create using an IDE.<br />
* The ability to easily switch between static and shared builds. CMake knows how to create shared libraries and modules on all platforms supported. Complicated platform-specific linker flags are handled, and advanced features like built-in run time search paths for shared libraries are supported on many UNIX systems.<br />
* Automatic generation of file dependencies and support for parallel builds on most platforms.<br />
<br />
When developing cross-platform software, CMake provides a number of additional features:<br />
<br />
* The ability to test for machine byte order and other hardware-specific characteristics.<br />
* A single set of build configuration files that work on all platforms. This avoids the problem of developers having to maintain the same information in several different formats inside a project.<br />
* Support for building shared libraries on all platforms that support it.<br />
* The ability to configure files with system-dependent information, such as the location of data files and other information. CMake can create header files that contain information such as paths to data files and other information in the form of #define macros. System specific flags can also be placed in configured header files. This has advantages over command line -D options to the compiler, because it allows other build systems to use the CMake built library without having to specify the exact same command line options used during the build.<br />
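The configured-header approach described in the last item can be sketched as follows. The file and macro names here are illustrative, not from any real project:<br />
<br />
<syntaxhighlight lang="cmake"><br />
# CMakeLists.txt fragment (illustrative names): record configure-time<br />
# values in a generated header instead of passing -D compiler flags.<br />
set(DATA_DIR "${CMAKE_INSTALL_PREFIX}/share/myproject")<br />
configure_file(config.h.in config.h)<br />
</syntaxhighlight><br />
<br />
A matching config.h.in template might contain:<br />
<br />
<syntaxhighlight lang="c"><br />
/* config.h.in -- @DATA_DIR@ is replaced at configure time */<br />
#define MYPROJECT_DATA_DIR "@DATA_DIR@"<br />
</syntaxhighlight><br />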
<br />
<br />
<br />
===The History of CMake===<br />
<br />
CMake development began in 1999 as part of the Insight Toolkit (ITK, www.itk.org), funded by the U.S. National Library of Medicine. ITK is a large software project that works on many platforms and can interact with many other software packages. To support this, a powerful, yet easy-to-use build tool was required. Having worked with build systems for large projects in the past, the developers designed CMake to address these needs. Since then CMake has continuously grown in popularity, with many projects and developers adopting it for its ease-of-use and flexibility. Since 1999, CMake has been under active development and has matured to the point where it is a proven solution for a wide range of build issues. The most telling example of this is the successful adoption of CMake as the build system of the K Desktop Environment (KDE), arguably the largest open-source software project in existence.<br />
<br />
CMake also includes software testing support in the form of CTest. Part of the process of testing software involves building the software, possibly installing it, and determining what parts of the software are appropriate for the current system. This makes CTest a logical extension of CMake as it already has most of this information. In a similar vein, CMake contains CPack, which is designed to support cross-platform distribution of software. It provides a cross-platform approach to creating native installations for your software, making use of existing popular packages such as NSIS, RPM, Cygwin, and PackageMaker.<br />
<br />
CMake continues to track and support popular build tools as they become available. CMake has quickly provided support for new versions of Microsoft's Visual Studio and Apple's Xcode IDE. In addition, support for the new build tool Ninja from Google has been added to CMake. With CMake, once you write your input files you get support for new compilers and build systems for free because the support for them is built into new releases of CMake and not tied to your software distribution. CMake also has ongoing support for cross compiling to other operating systems or embedded devices. Most commands in CMake properly handle the differences between the host system and the target platform when cross-compiling.<br />
<br />
====Why Not Use Autoconf?====<br />
<br />
Before developing CMake, its authors had experience with the existing set of available build tools. Autoconf combined with Automake provides some of the same functionality as CMake, but to use these tools on a Windows platform requires the installation of many additional tools not found natively on a Windows box. In addition to requiring a host of tools, autoconf can be difficult to use or extend, and some tasks that are easy in CMake are impossible with it. Even if you do get autoconf and its required environment running on your system, it generates Makefiles that will force users to the command line. CMake, on the other hand, provides a choice, allowing developers to generate project files that can be used directly from the IDEs to which Windows and Xcode developers are accustomed.<br />
<br />
While autoconf supports user-specified options, it does not support dependent options where one option depends on another property or selection. For example, in CMake you could make a user option for multithreading dependent on first determining whether the user's system has multithreading support. CMake provides an interactive user interface, making it easy for the user to see which options are available and how to set them.<br />
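Such a dependent option can be expressed with CMake's CMakeDependentOption module; a minimal sketch (the option name and help string are illustrative):<br />
<br />
<syntaxhighlight lang="cmake"><br />
include(CMakeDependentOption)<br />
find_package(Threads)<br />
# USE_THREADS is offered to the user (default ON) only when a threads<br />
# package was found; otherwise it is forced to OFF.<br />
cmake_dependent_option(USE_THREADS "Build with multithreading" ON<br />
                       "Threads_FOUND" OFF)<br />
</syntaxhighlight><br />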
<br />
For UNIX users, CMake also provides automated dependency generation that is not done directly by autoconf. CMake's simple input format is also easier to read and maintain than a combination of Makefile.in and configure.in files. The ability of CMake to remember and chain library dependency information has no equivalent in autoconf/automake.<br />
<br />
====Why Not Use JAM, qmake, SCons, or ANT?====<br />
<br />
Other tools such as ANT, qmake, SCons, and JAM have taken different approaches to solving these problems and they have helped us to shape CMake. Of the four, qmake is the most similar to CMake, although it lacks much of the system interrogation that CMake provides. Qmake's input format is more closely related to a traditional Makefile. ANT, JAM, and SCons are also cross-platform although they do not support generating native project files. They do break away from the traditional Makefile-oriented input with ANT using XML; JAM using its own language; and SCons using Python. A number of these tools run the compiler directly, as opposed to letting the system's build process perform that task. Many of these tools require other tools such as Python or Java to be installed before they will work.<br />
<br />
====Why Not Script It Yourself?====<br />
<br />
Some projects use existing scripting languages such as Perl or Python to configure build processes. Although similar functionality can be achieved with systems like this, over-use of these tools can make the build process more of an Easter egg hunt than a simple-to-use build system. When building your software package, users are forced to find and install version 4.3.2 of this and 3.2.4 of that before they can even start the build process. To avoid that problem, it was decided that CMake would require no more tools than the software it was being used to build would require. At a minimum, using CMake requires a C compiler, that compiler's native build tools, and a CMake executable. CMake was written in C++, requires only a C++ compiler to build, and precompiled binaries are available for most systems. Scripting it yourself also typically means you will not be generating native Xcode or Visual Studio workspaces, making Mac and Windows builds limited.<br />
<br />
====On What Platforms Does CMake Run?====<br />
<br />
CMake runs on a wide variety of platforms including Microsoft Windows, Apple Mac OS X, and most UNIX or UNIX-like platforms. At the time of the writing of this book, CMake was tested nightly on the following platforms: Windows 98/2000/XP/Vista/7, AIX, HPUX, IRIX, Linux, Mac OS X, Solaris, OSF, QNX, CYGWIN, MinGW, and FreeBSD. You can check www.cmake.org for a current list of tested platforms.<br />
<br />
Likewise, CMake supports most common compilers. It supports the GNU compiler on all CMake-supported platforms. Other tested compilers include Visual Studio 6 through 11, Intel C, SGI CC, Mips Pro, Borland, Sun CC, and HP aCC. CMake should work for most UNIX-style compilers out-of-the-box. If the compiler takes arguments in a strange way, then see the section Porting CMake to New Platforms for information on how to customize CMake for a new compiler.<br />
<br />
====How Stable is CMake?====<br />
<br />
Before adopting any new technology or tool for a project, a developer will want to know how well supported and popular the tool is. Over the past 12 years, CMake has grown in popularity as a build tool. Both the developer and user communities continue to grow. The website Ohloh (http://www.ohloh.net) reports that there are over 8,000,000 lines of CMake code in existence. CMake has continued to develop support for new build technologies and tools as they become available. The CMake development team has a strong commitment to backwards compatibility. If CMake can build your project once, it should always be able to build your project. Also, since CMake is an open-source project, the source code is always available for a project to edit and patch as needed.<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:MastringCmakeVersion31]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=MastringCmakeVersion31&diff=5597MastringCmakeVersion312019-08-09T01:31:53Z<p>Onionmixer: Mastring Cmake 메인페이지 추가</p>
<hr />
<div>;Mastring Cmake Version 3.1<br />
<br />
원본-영어<br><br />
Kitware Inc.<br />
<br />
<br />
번역진행<br><br />
'''Google Translation Service'''<br />
<br />
<br />
검수진행<br><br />
'''없음'''<br />
<br />
----<br />
===Mastring Cmake===<br />
<br />
'''번역관련 내용'''<br />
<br />
* [[:MastringCmakeVersion31:transdic|번역관련 기타내용]]<br />
<br />
<br />
===Book===<br />
<br />
* [[:MastringCmakeVersion31:Contents|목차]]<br />
<br />
<br />
* [[:MastringCmakeVersion31:Chapter_01|Chapter 01 Why CMake?]]<br />
* [[:MastringCmakeVersion31:Chapter_02|Chapter 02 Getting Started]]<br />
* [[:MastringCmakeVersion31:Chapter_03|Chapter 03 Key Concepts]]<br />
* [[:MastringCmakeVersion31:Chapter_04|Chapter 04 Writing CMakeLists Files]]<br />
* [[:MastringCmakeVersion31:Chapter_05|Chapter 05 System Inspection]]<br />
* [[:MastringCmakeVersion31:Chapter_06|Chapter 06 Custom Commands And Targets]]<br />
* [[:MastringCmakeVersion31:Chapter_07|Chapter 07 Converting Existing Systems To CMake]]<br />
* [[:MastringCmakeVersion31:Chapter_08|Chapter 08 Cross Compiling With CMake]]<br />
* [[:MastringCmakeVersion31:Chapter_09|Chapter 09 Packaging With CPack]]<br />
* [[:MastringCmakeVersion31:Chapter_10|Chapter 10 Automation & Testing With CMake]]<br />
* [[:MastringCmakeVersion31:Chapter_11|Chapter 11 Porting CMake to New Platforms and Languages]]<br />
* [[:MastringCmakeVersion31:Chapter_12|Chapter 12 Tutorials]]<br />
* [[:MastringCmakeVersion31:Appendix_A|Appendix A Command-Line Tools]]<br />
* [[:MastringCmakeVersion31:Appendix_B|Appendix B Interactive Dialogs]]<br />
* [[:MastringCmakeVersion31:Appendix_C|Appendix C Reference Manuals]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:Head01&diff=5596DesignPatternSmalltalkCompanion:Head012018-12-07T06:12:18Z<p>Onionmixer: 오타수정</p>
<hr />
<div>===서론===<br />
<br />
Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides (Gamma, 1995)가 집필한 ''Design Patterns: Elements of Reusable Object-Oriented Software'' 의 자매편인 ''The Design Patterns Smalltalk Companion'' 을 읽게 된 것을 환영한다. '''디자인 패턴''' 편은 디자인 패턴을 처음으로 다룬 서적은 아니었지만 소프트웨어 공학분야에 작은 혁명을 불러왔다. 이제 설계자들은 디자인 패턴의 언어로 대화하며 디자인 패턴과 관련된 워크숍, 출판물, 웹사이트 수도 그간 급증해왔다. 현재 디자인 패턴은 객체지향 프로그래밍 연구 및 개발의 주요 주제일 뿐만 아니라 새로이 디자인 패턴 공동체도 생겨나고 있다.<br />
<br />
'''디자인 패턴''' 편에서는 객체지향 프로그래밍 언어에서 실행되는 애플리케이션에 사용되는 23 가지의 디자인 패턴을 설명하고 있다. 물론 객체지향 프로그래밍 설계자가 필요로 하는 디자인 지식을 23가지 패턴으로 모두 설명할 수는 없을 것이다. 하지만 "Gang of Four(GoF)" (Gamma et al.)에 소개된 패턴들은 디자인 패턴을 시작하는데 튼튼한 기초가 된다. 이러한 패턴들은 Smalltalk 개발환경에서 발견된 기반 클래스 라이브러리에 대한 설계수준의 내용이다. 패턴들이 모든 문제를 해결할 순 없지만, 실생활에서 나타나는 다양한 디자인 문제에 대한 해법(solution)으로 통합할 수 있는 유용한 구조를 찾고, 일반적으로 디자인 패턴을 학습하는 경우에 대한 기반을 제공한다. 이는 디자인 전문가 수준의 지식을 포함하고 있으며, 고급스러우면서 관리가 용이하고 확장이 가능한 객체지향 프로그램을 만드는데 필요한 기반을 제공한다. <br />
<br />
''Smalltalk Companion'' 에서는 이것을 패턴의 "base library"에 추가하지 않았다; 오히려 스몰토크 설계자와 프로그래머를 위해 제시하였고, 때로는 특별한 관점이 필요한 곳에서 패턴을 해석하고 확대시켰다. 우리의 목표는 '''디자인 패턴''' 책을 대체하는 것이 아니다; 본 저서는 [디자인 패턴]을 대신해 읽기보다는 함께 읽을 것을 권한다. GoF에서 이미 상세히 다룬 정보는 포함시키지 않고자 했다. 그 대신 내용을 자주 참조하고 있으니 여러분도 그러길 권한다.<br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:AbstractFactory&diff=5595DesignPatternSmalltalkCompanion:AbstractFactory2018-07-27T06:44:49Z<p>Onionmixer: 검수 20180727</p>
<hr />
<div>==ABSTRACT FACTORY (DP 87)==<br />
<br />
객체생성<br />
<br />
===의도===<br />
<br />
관련 객체 또는 종속 객체군을 작성하기위한 인터페이스를 제공한다. 클라이언트는 구체적인 클래스를 지정하지 않고도 추상적인 방식으로 모든 제품군의 제품을 만들 수 있다.<br />
<br />
===구조===<br />
<br />
[[image:dpsc_chapter03_AbstractFactory_01.png]]<br />
<br />
===논의===<br />
<br />
역설적으로 느껴지겠지만, 때때로 패턴이 적용되는 상황이 패턴 해결책만큼이나 복잡한 경우가 발생되는데, 추상 팩토리(abstract factory)가 이런 경우에 해당된다. 패턴 자체는 사실 꽤 단순한데, 문제 컨텍스트<sup>problem context</sup>에 많은 부분이 포함되어 있다. 일단 어떤 상황에 추상 팩토리 패턴을 적용할 수 있는지 부터 살펴보도록 하자.<ref name="주석1">추상 팩토리 패턴 이후에 나오는 Builder (47) 패턴을 읽어보길 권한다; 두 패턴은 밀접한 관계가 있으며 여러 문제를 공유한다.</ref><br />
<br />
첫째, 여기에 컴포넌트 하위부품(subpart)부터 하나씩 시작해서 제품을 구축하는 목적을 가진 애플리케이션이 하나 있다. 이 애플리케이션을 이용해 차체, 엔진, 변속기, 차 내부로 나누어지는 자동차를 구현할 수 있다. 둘째, 이 애플리케이션에서는 단일 제품의 컴포넌트가 동일한 부품군, 즉 세트로 된 부품이어야 하며, 예를 들어 포드(Ford) 자동차의 경우는 Ford사에서 만든 엔진과 변속기를 필요로 한다. 이러한 부품은 Ford 군(계열)에 속한다고 할 수 있다. 셋째, 우리는 포드(Ford) 부품, 도요타(Toyota) 부품, 포르쉐(Porsche) 부품 등 여러 종류의 부품군을 가지고 있다. 해당 클래스는 클래스 계층구조(class hierarchy)를 통해 적절한 하위계층구조(subhierarchy)로 뻗어 나간다: 엔진은 CarEngine 하위계층구조에, 차체는 CarBody 계층구조에 속하게 된다. 따라서 우리는 이 애플리케이션이, (1) 하나의 부품군으로부터 자동차 컴포넌트를 쉽게 검색하고 부품군들 간의 오류를 허용하지 않으며(Toyota 엔진이 Ford 자동차에 사용되는 경우가 없도록), (2) 모든 부품군에 있어 통일된 부품 검색 코드를 사용할 수 있도록 하는 방법이 필요하다. 이런 경우에 추상 팩토리 패턴을 사용하면 두 가지 조건을 모두 충족시킬 수 있다.<br />
<br />
아래와 같은 자동차 클래스와 자동차 부품 클래스가 있다: <br />
<br />
[[image:dpsc_chapter03_AbstractFactory_02.png]]<br />
<br />
[[image:dpsc_chapter03_AbstractFactory_03.png]]<br />
<br />
Vehicle 과 CarPart 는 Object 의 하위클래스이다. 물론 이러한 클래스 구조는 여러모로 지나치게 단순화시킨 구조이다. Ford와 같은 자동차 회사에서는 자동차, 차체, 엔진, 심지어 엔진 유형까지 (예: 가솔린 구동식 또는 디젤 엔진) 여러 유형의 모델이 있다. 따라서 실 세계의 자동차 모델에는 여기서 나타낸 것보다 추상화 단계가 더 많다. 하지만 이 책의 패턴 설명은 최대한 단순하고 쉽게 관리하는 것을 목적으로 한다. <br />
<br />
먼저 패턴 구현을 CarPartFactory 라는 추상 팩토리 클래스를 정의하는것부터 시작하자. 클래스는 "구체적 클래스를 지정하지 않고 관련 객체군 또는 종속 객체군을 생성할 수 있는 인터페이스를 제공한다" ('의도' 단락 참조). 또한 클래스는 makeCar, makeEngine, makeBody와 같은 추상적 '''Product'''-creation 메서드를 정의한다. 메서드까지 정의하고 나면 사용자는 제품군마다 하나씩의 팩토리의 구체적 하위클래스를 정의한다. 각 하위클래스는 적절한 부품을 생성하고 반환(return)하기 위해 제품 생성 메서드를 재정의한다. 따라서 사용자는 Object 아래에 다음과 같이 새로운 하위계층구조를 추가한다.<br />
<br />
[[image:dpsc_chapter03_AbstractFactory_04.png]]<br />
<br />
부품 생성 메서드를 구현하기 위해 추상 팩토리 클래스부터 시작해야 하며,<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarPartFactory>>makeCar<br />
self subclassResponsibility<br />
<br />
CarPartFactory>>makeEngine<br />
self subclassResponsibility<br />
<br />
CarPartFactory>>makeBody<br />
self subclassResponsibility<br />
</syntaxhighlight><br />
<br />
이후 이러한 메서드를 오버라이드(override)하는 구체적 하위클래스를 추가한다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
FordFactory>>makeCar<br />
^FordCar new<br />
<br />
FordFactory>>makeEngine<br />
^FordEngine new<br />
<br />
FordFactory>>makeBody<br />
^FordBody new<br />
<br />
ToyotaFactory>>makeCar<br />
^ToyotaCar new<br />
<br />
ToyotaFactory>>makeEngine<br />
^ToyotaEngine new<br />
<br />
ToyotaFactory>>makeBody<br />
^ToyotaBody new<br />
</syntaxhighlight><br />
<br />
만들어질 팩토리는 전체적으로 다음과 같은 모습이 된다.<br />
<br />
[[image:dpsc_chapter03_AbstractFactory_05.png]]<br />
<br />
추상 팩토리 패턴을 이용할때, 이 조각들을 조립하는 것은 팩토리 클라이언트에 달려 있다. 팩토리는 부품이 한 부품군에서 나오도록 보장하지만 부품을 반환하는 일만 하며, 최종 제품으로 조립하는 작업은 하지 않는다. 조립하는 작업은 클라이언트의 일이다.<ref name="주석2">이 점은 팩토리의 부품 자체가 복잡하게 구성된 부품일 경우에도 마찬가지다. 예를 들어, 팩토리는 여러 개의 하위컴포넌트 위젯들이 통합된 복합 판유리와 같이 미리 조립된 부품을 반환할 수도 있는 것이다. 그럼에도 불구하고, 팩토리 클라이언트에게는 이것이 창문처럼 좀 더 복잡한 제품으로 통합될 수 있는 하나의 부품으로 간주된다.</ref>(이것이 추상 팩토리와 Builder 패턴 (47) 사이의 주요 차이점이라는 것은 다음에 살펴보자.)<br />
<br />
CarAssembler 객체가 팩토리 클라이언트이고, 여기에 CarPartFactory 객체를 참조하는 factory 라는 이름의 인스턴스 변수가 하나 있다고 가정하자<ref name="주석3">언뜻 보면 조립 작업 과정의 최종 제품인 새 Car 를 생성하는 factory makeCar 를 말함으로써 자동차 조립 공정을 시작하기가 혼란스러워 보인다. 사실 이 방법은 복합 객체를 구축하는 전형적인 방법이다. 예를 들어 Visual Smalltalk 에서는 Menu 의 구성을 시작할 때 Menu new 라고 말한다. 이 시점에서 사용자가 가진 것은 메뉴의 껍데기(shell)에 불과하며, MenuItems 라는 하위컴포넌트를 추가하기 전까지는 제대로된 메뉴의 기능을 하지 못한다. 이와 비슷하게, factory makeCar 는 엔진이나 차체처럼 사용자가 추가해야 하는 컴포넌트로서 Car 의 빈 껍데기를 반환한다.</ref>.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarAssembler>>assembleCar<br />
| car |<br />
"Create the top-level part, the car object which<br />
starts out having no subcomponents, and add an<br />
engine, body, etc."<br />
<br />
car := factory makeCar.<br />
car<br />
addEngine: factory makeEngine;<br />
addBody: factory makeBody;<br />
...<br />
^car<br />
</syntaxhighlight><br />
<br />
<br />
만약 factory 가 FordFactory 의 인스턴스라면 자동차에 추가되는 엔진은 FordEngine 이 되며, 만약 factory 가 ToyotaFactory 의 인스턴스였다면 factory makeEngine 에 의해 ToyotaEngine 이 생성되어 진행 중인 자동차에 추가된다.<br />
<br />
여전히 풀리지 않은 궁금증이 하나 있다. CarAssembler(팩토리 클라이언트) 는 어떻게 해서 CarPartFactory 의 특정 하위클래스의 인스턴스를 얻게될까? 소비자의 선택을 바탕으로 해서 스스로 특정 하위클래스를 인스턴스화하기도 하며, 외부 객체에 의해 factory 인스턴스를 전달받기도 한다. 그러나 두 경우 모두 자동차 및 구성요소의 하위부품을 생성하는 코드는 동일하게 존재한다. 즉, 모든 CarPartFactory 클래스는 다형적으로 동일한 메시징 프로토콜을 실행하기 때문에, factory 의 클라이언트는 어떤 유형의 factory 와 대화 중인지에 대해 신경 쓰지 않아도 된다. 그저 factory 의 프로토콜이 제공하는 일반 메시지를 전송할 뿐이다.<br />
<br />
다형성 덕분에 클라이언트는 다수의 조건문(conditional)<ref name="역자주1">타 언어의 if 등의 비교 조건문</ref>보다는 한 가지 버전의 코드만 구현해서, 여전히 어떤 종류의 자동차든 생산할 수 있다. 비교해보자면, 추상 팩토리 패턴이 없는경우 자동차 생성 코드는 다음과 같은 모습이 된다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarAssembler>>assembleCar<br />
"Without Abstract Factory."<br />
| car |<br />
car := (consumerChoice == #Ford<br />
ifTrue: [FordCar new]<br />
ifFalse: [consumerChoice ==#Toyota<br />
ifTrue: [ToyotaCar new]<br />
ifFalse: [consumerChoice == #Porsche<br />
ifTrue: [PorscheCar new]<br />
ifFalse: [...]).<br />
car addEngine:<br />
(consumerChoice == #Ford<br />
ifTrue: [FordEngine new]<br />
ifFalse: [...]).<br />
...<br />
^car<br />
</syntaxhighlight><br />
<br />
<br />
지금의 경우에서 CarAssembler 객체 내에서는 어떠한 종류의 자동차를 구축하고, 그 하위부품은 무엇이 될 것인지를 스스로 결정하며, 실제 부품의 인스턴스화를 실행한다. 하지만 추상 팩토리 해법은 CarAssembler 객체로부터 생기는 모든 행위를 factory 라는 하나의 구분된 행위로 추상화시킨다. CarAssembler 내부에서 사용할 객체를 특정 자동차에 대한 factory 객체로 구성한 뒤에, 자동차와 하위부품을 생산하려 할때 CarAssembler 객체는 자신이 취급하는 factory 객체에 원하는 부품의 종류와 상관없이 항상 동일한 메시지로 부품을 요청하게 된다.<br />
<br />
추상 팩토리 접근법은 좀 더 모듈식이며 쉽게 확장이 가능한 설계로 만든다. 시스템안에 두 가지 유형의 자동차를 추가하려고 할때, 복잡한 조건문 집합 내에서 코드상의 여러 위치에 새 부품을 추가하기 위해 CarAssembler>>buildCar 를 다시 찾는것 보다는, CarPartFactory의 새 하위클래스와 이를 인스턴스화시킬 코드만 존재하면 된다.<br />
<br />
여기서는 사실상 두 가지의 추상화가 이루어지고 있다. 첫째, 모든 CarPartFactory 의 객체들은 동일한 메시지 인터페이스를 구현한다. 이런 구현은 factory 클라이언트가 보내는 CarPartFactory 타입이 정확히 무엇인지 신경쓰지 않고 동일한 부품 생성 메시지를 보낼 수 있게 해준다. 둘째, ConcreteProducts(자동차 부품 클래스) 는 각 부품 하위계층구조의 추상 상위클래스(superclass)에서 정의된 것과 동일한 인터페이스를 구현하고 있다. 예를 들어, 모든 Car 들은 addEngine: 과 addBody: 메시지에 대해 어떻게 응답해야 하는지를 알고 있다. 또한 CarBody 객체들은 모두 color: 과 color 메시지들을 구현한다. 모든 CarEngine 들 또한 이와 마찬가지로 공통된 메시징 인터페이스를 제공한다. 다른 부품의 경우도 마찬가지다.<br />
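본문에서 말한 두 번째 추상화, 즉 각 부품 하위계층구조의 공통 인터페이스는 대략 다음과 같은 형태가 된다. 아래의 인스턴스 변수 이름과 메서드 본문은 설명을 위해 가정한 스케치이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"추상 상위클래스에 한 번만 정의되는 공통 프로토콜의 스케치"<br />
Car>>addEngine: aCarEngine<br />
engine := aCarEngine<br />
<br />
Car>>addBody: aCarBody<br />
body := aCarBody<br />
<br />
CarBody>>color: aColor<br />
color := aColor<br />
<br />
CarBody>>color<br />
^color<br />
</syntaxhighlight><br />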
<br />
이 상황을 보다 상세히 설명하기 위해서, [디자인 패턴]에서와 마찬가지로 factory 클라이언트가 외부<sup>outsider</sup>로부터 factory 객체를 전달받았다고 가정하자. 이는 CarAssembler 가 어떤 종류의 CarPartFactory 를 사용하는지 스스로 정확히 알지 못하고 있다는 점을 암시한다. 이를 위해 (1) 자신의 factory 객체를 참조하는 인스턴스 변수를 갖기 위해 factory 클라이언트인 CarAssembler 를 정의하던가, 또는 (2) factory 객체를 argument 로서 클라이언트의 자동차 생성 메서드로 전달함으로써 추상화를 구현해 낸다. 첫 번째 방법을 사용한다면, 지금의 경우에서는 CarAssembler 클래스 및, 클래스의 관련 메서드를 다음과 같이 정의해야할 것이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
Object subclass: #CarAssembler<br />
instanceVariableNames: 'factory'<br />
classVariableNames: ''<br />
poolDictionaries: ''<br />
<br />
CarAssembler>>factory: aCarPartFactory<br />
"setter method"<br />
factory := aCarPartFactory<br />
<br />
CarAssembler class>>using: aCarPartFactory<br />
"Instance creation method"<br />
^self new factory: aCarPartFactory<br />
</syntaxhighlight><br />
<br />
외부 객체, 즉 CarAssembler의 클라이언트는 적절한 factory 인스턴스를 이용해 CarAssembler 를 생성하여 초기화(initialize)할 것이다. CarAssembler 의 클라이언트가 대화형 3D 자동차 시각화 애플리케이션이라고 가정해 보자. 사용자는 유저 인터페이스에서 자동차를 선택할 수 있다; 애플리케이션은 이에 대한 응답으로 화면에 3차원 그래픽의 이미지를 만든다. 사용자는 자동차가 어떻게 생겼는지 살펴보고 인테리어, 엔진 또는 관심 부분을 "살펴보기"(가까이 가기, 내부 모습)위해 3D 공간을 탐색한다. 자동차는 화면 위의 사용자 버튼의 선택을 기반으로 구성된다고 가정하며, 프로그램의 사용자는 ''Ford, Toyota, Porsche'' 버튼 중 하나를 클릭해서 살펴보기 원하는 차를 선택할 수 있다. 이에 대한 응답으로 유저 인터페이스 코드는 다음과 같은 작업을 수행할 것이다<ref name="역자주2">다음의 코드에서 FordFactory 는 CarPartFactory 의 하위 클래스이다.</ref>.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarAssemblerUI>>fordButtonClicked<br />
"The user clicked the 'Ford' button."<br />
| assembler car |<br />
assembler := CarAssembler using: FordFactory new.<br />
car := assembler assembleCar.<br />
"Now, draw the assembled car on the screen:"<br />
...<br />
</syntaxhighlight><br />
<br />
<br />
앞에서 논의했던 두번째 해법의 경우, CarAssembler 는 자신의 factory 객체를 참조하는 인스턴스 변수를 가지지 않는다. 그 대신에 클라이언트가 차를 조립하고 싶을 때 factory 를 간단히 넘겨주면 된다. 이런 과정 덕분에 assembleCar 메서드는 하나의 인수만 가지도록 변경된다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarAssembler>>assembleCarUsing: aCarPartFactory<br />
| car |<br />
car := aCarPartFactory makeCar.<br />
car<br />
addEngine: aCarPartFactory makeEngine;<br />
addBody: aCarPartFactory makeBody;<br />
...<br />
^car<br />
</syntaxhighlight><br />
<br />
<br />
<br />
===구현과 예제 코드===<br />
<br />
이제 추상 팩토리 패턴의 주요 요소를 본인의 애플리케이션에 사용하기에 충분할 만큼 살펴보았다. 이번 단락에서는 추상 팩토리 주제에서 변형의 경우들을 이야기하려 하며, 특히 팩토리 객체를 구현하는 여러가지 방법을 논의하고자 한다. 이 변형 중 일부는 [디자인 패턴]에서 다루고 있고, 저자들은 Smalltalk 특정적인 다양성도 몇 가지 추가하였다. 여기서는 각 구현의 장단점을 지적하겠지만, 특별한 한 가지 해법을 옹호하려는 것은 아니다; 오히려 다른 모든 패턴들과 마찬가지로 다양한 방법으로 구현될 수 있음을 보이는 것이 목표다. 선택권은 특정 애플리케이션과 다른 객체들과의 상호작용으로 인해 부과되는 추가 제약이나 개인의 선호도 및 미학에 따라 좌우된다. 크리스토퍼 알렉산더(Christopher Alexander)와 그 동료들은, 패턴이 특정 문제에 대한 해법에 관련된 기본 견해와 개념을 제공하기는 하지만 "본인의 선호도와 지역적 상태에 대해 조정함으로써 자신만의 방식으로" 실현시킬 수 있다고 주장했다 (Alexander et al., 1977, p. xiii).<br />
<br />
<br />
<br />
====자동차 팩토리에 대한 바닐라 구현====<br />
<br />
예제로 보여 준 애플리케이션에서 CarPartFactory 는 모든 자동차 factory 에 대한 인터페이스를 정의한다. 대안 팩토리는 CarPartFactory 의 하위클래스로 정의되며, 각 대안 팩토리는 적절한 부품 생성 메서드를 오버라이드한다. 가장 간단히 말해 바닐라 구현에서 이러한 메서드 각각은 다음과 같이 인스턴스화하고 반환하기 위해서 클래스를 하드코딩(hard-code)한다. <br />
<br />
<syntaxhighlight lang="smalltalk"><br />
FordFactory>>makeEngine<br />
^FordEngine new<br />
<br />
PorscheFactory>>makeEngine<br />
^PorscheEngine new<br />
</syntaxhighlight><br />
<br />
<br />
이 접근법은 각 부품에 대한 코드를 지역화(localize)한다. 물론 메서드를 지역화하는 것은 나중에 변경된 내용을 지역화 하는 것을 의미한다. <br />
<br />
Toyota 가 Ford 로부터 엔진을 구매하기 시작한다고 가정하자(자동차 산업에서는 실제로 이렇게 희한한 일이 발생하기도 한다!). 새로운 유형의 엔진은 FordEngine 인스턴스를 반환하도록 ToyotaFactory>>makeEngine 메서드만 변경하면 된다는 것을 의미하며, 클라이언트 코드는 변경되지 않은 채로 유지된다.<br />
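이 가정을 코드로 옮기면 다음 한 메서드만 바뀐다는 것을 확인할 수 있다(스케치):<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
ToyotaFactory>>makeEngine<br />
"Toyota now buys engines from Ford; only this<br />
method changes, and client code is unaffected."<br />
^FordEngine new<br />
</syntaxhighlight><br />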
<br />
<br />
<br />
====Constant Method 해법====<br />
<br />
앞에서 살펴본 구현은 가장 간단한 형태기는 하지만, 그 안에서 모든 factory 클래스의 각 부품에 맞는 각 메서드를 인스턴스화하기 위해 클래스를 하드코딩했다. 이 작업은 CarPartFactory 와 CarPartFactory 의 구체 하위클래스에 makeEngine, makeBody 등의 메서드를 필요로 한다. 이 방법 대신 Factory Method (63) 패턴의 Constant Method 변형 중 하나를 적용시킬 수도 있다. 여기에서는 각각의 부품생성 메서드를 한번만 정의해서 필요한 클래스 객체를 쉽게 인스턴스화 할 수 있도록 Constant Method 를 이용해서 각 factory 클래스가 부품 클래스의 "이름을 만들게" 할 것이다(Beck, 1997).<br />
<br />
추상적 상위클래스에서 부품 생성 메서드를 factory 메서드로 재정의함으로써 시작한다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarPartFactory>>makeCar<br />
^self carClass new<br />
<br />
CarPartFactory>>makeEngine<br />
^self engineClass new<br />
<br />
CarPartFactory>>makeBody<br />
^self bodyClass new<br />
</syntaxhighlight><br />
<br />
그 뒤에 각 factory 클래스에 Constant Method 를 정의한다. <br />
<br />
<syntaxhighlight lang="smalltalk"><br />
FordFactory>>carClass<br />
^FordCar<br />
<br />
FordFactory>>engineClass<br />
^FordEngine<br />
<br />
FordFactory>>bodyClass<br />
^FordBody<br />
<br />
PorscheFactory>>engineClass<br />
^PorscheEngine<br />
<br />
PorscheFactory>>bodyClass<br />
^PorscheBody<br />
</syntaxhighlight><br />
<br />
이제 코드를 모듈화시켰으므로 새로운 자동차 팩토리 클래스가 정의되면 그에 상응하는 부품 클래스만 이름을 정하면 된다. 그러나 자동차 부품 예제의 경우 이것과 바닐라 구현 간의 차이는 그다지 크지 않다.<br />
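예를 들어 새로운 제품군을 추가한다면(여기서 Honda 관련 클래스 이름은 설명을 위해 가정한 것이다), 하위클래스 하나와 그에 상응하는 Constant Method 들만 정의하면 된다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarPartFactory subclass: #HondaFactory<br />
instanceVariableNames: ''<br />
classVariableNames: ''<br />
poolDictionaries: ''<br />
<br />
HondaFactory>>carClass<br />
^HondaCar<br />
<br />
HondaFactory>>engineClass<br />
^HondaEngine<br />
<br />
HondaFactory>>bodyClass<br />
^HondaBody<br />
</syntaxhighlight><br />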
<br />
<br />
<br />
====부품 카탈로그 해법====<br />
<br />
앞에서 설명된 구현들의 경우, 자동차의 각 부품은 factory 클래스 내의 고유 메서드를 필요로 한다 (예: makeBody, makeEngine). 이는 factory 클래스 안에 메서드가 넘쳐나게 만들 수 있다. 자동차에 새로운 유형의 부품, 즉 새로운 유형의 고급 오디오 시스템을 추가하려면 CarPartFactory 와 그에 해당하는 모든 하위클래스에 새로운 makeDeluxeCDAudioSystem 메서드를 추가해야 할 것이며, 이런 작업은 factory 클라이언트가 알아야 하는 인터페이스를 확장시키게 된다.<br />
<br />
[디자인 패턴] 에서는 Smalltalk 에서 클래스는 일급 객체<sup>first-class</sup><ref name="일급 객체">https://ko.wikipedia.org/wiki/일급_객체</ref>라는 사실을 이용해서 이 문제를 (DP90과 그 다음 문제) 해결하는 방법을 설명하고 있다. 자동차 부품의 클래스를 "부품 카탈로그<sup>part catalog</sup>"로 저장하고 부품마다 하나씩 메서드를 만드는 것이 아니라 파라미터화된 유일한 부품 생성 메서드를 가진 CarPartFactory 를 구현한다.<br />
<br />
이러한 접근법은 CarPartFactory 클래스에 partCatalog 인스턴스 변수를 추가하고 이것을 Dictionaryㅡ여기서 키<sup>key</sup>는 (Symbol 과 같은) 부품 유형이 되며 해당 값<sup>value</sup>은 적절한 클래스가 된다ㅡ로서 초기화하는 과정을 동반한다. FordFactory 클래스의 partCatalog 는 다음과 같다:<br />
<br />
{| style="border: 1px solid black;"<br />
|- style="color: white; background-color: black;"<br />
|'''Key'''||'''Value'''<br />
|-<br />
|#car||<the FordCar class object><br />
|-<br />
|#engine||<the FordEngine class object><br />
|-<br />
|#body||<the FordBody class object><br />
|-<br />
|...||...<br />
|}<br />
<br />
<br />
이 방식의 접근법을 가능하게 하는 새로운 CarPartFactory 클래스 정의와 메서드를 다음 내용에서 소개한다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
Object subclass: #CarPartFactory<br />
instanceVariableNames: 'partCatalog'<br />
classVariableNames: ''<br />
poolDictionaries: ''<br />
<br />
CarPartFactory class>>new<br />
^self basicNew initialize<br />
<br />
CarPartFactory>>initialize <br />
partCatalog := Dictionary new<br />
</syntaxhighlight><br />
<br />
<br />
하위클래스는 고유의 부품 카탈로그 버전을 구축하기 위해 추상적 상위클래스에서 선언된 initialize 메서드를 다음과 같이 오버라이드하게될 것이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
FordFactory>>initialize <br />
super initialize.<br />
partCatalog<br />
at: #car put: FordCar;<br />
at: #body put: FordBody;<br />
at: #engine put: FordEngine;<br />
...<br />
^self<br />
<br />
PorscheFactory>>initialize <br />
super initialize.<br />
partCatalog<br />
at: #car put: PorscheCar;<br />
at: #body put: PorscheBody;<br />
at: #engine put: PorscheEngine;<br />
...<br />
^self<br />
</syntaxhighlight><br />
<br />
<br />
이제 추상 상위클래스에 부품 생성을 위해 값(객체)를 반환하기 위한 단일 메서드(single method)를 정의한다. 이는 부품의 유형<sup>part type</sup>을 argument 로 받는다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarPartFactory>>make: partType<br />
"Create a new part based on partType."<br />
| partClass |<br />
partClass := partCatalog at: partType ifAbsent: [^nil].<br />
^partClass new<br />
</syntaxhighlight><br />
<br />
<br />
이제 자동차의 factory 클라이언트는 (예: CarAssembler) 이렇게 만들어진 단일 메시지를 이용해 모든 부품을 생성해 낼수 있다. 부품 생성 코드는 다음과 같은 모습이 아니라,<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
anAutoFactory makeEngine.<br />
anAutoFactory makeBody.<br />
</syntaxhighlight><br />
<br />
다음과 같을 것이다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
anAutoFactory make: #engine.<br />
anAutoFactory make: #body.<br />
</syntaxhighlight><br />
<br />
부품 카탈로그 접근법을 사용하면, CarPartFactory 계층구조의 구체적 클래스는 고유한 부품 카탈로그를 초기화시키는 단일 메서드만 정의하면 된다. 그 외의 부품 생성 동작은 CarPartFactory 의 추상 클래스에 정의된다.<br />
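부품 카탈로그 방식에서 CarAssembler 의 조립 메서드가 어떤 모습이 될지 대략 그려 보면 다음과 같다. 기존 예제의 factory 인스턴스 변수를 그대로 가정한 스케치이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarAssembler>>assembleCar<br />
| car |<br />
"Every part, whatever its type, is requested<br />
through the single make: message."<br />
car := factory make: #car.<br />
car<br />
addEngine: (factory make: #engine);<br />
addBody: (factory make: #body).<br />
^car<br />
</syntaxhighlight><br />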
<br />
<br />
<br />
====다른 부품 카탈로그를 위한 또 다른 구현====<br />
<br />
"부품 카탈로그" 접근법을 고려하는 경우, 다른형태의 factory 를 구현하는 또 다른 방법이 있다. 이 경우에도 대체 factory 제공을 위해 서브클래싱(subclassing)을 사용하기 때문에 FordFactory, ToyotaFactory, PorscheFactory 중 하나를 클라이언트에서 선택할 필요가 있다. 하지만 부품 카탈로그는 각 factory 클래스의 모든 인스턴스에서 동일해야 하기 때문에 partCatalog 를 상위클래스내에서 인스턴스 변수로 정의하고, 클래스에서 선언된 변수를 적절히 초기화하기 위해, 각각의 factory 하위클래스에서 사용될 클래스 메서드를 코딩하며, 각 하위클래스마다 초기화 메서드를 한번씩 호출하도록 할 수 있다.<br />
<br />
여기서 추상클래스는 다음과 같이 재정의한다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
Object subclass: #CarPartFactory<br />
instanceVariableNames: ''<br />
classVariableNames: ''<br />
poolDictionaries: ''<br />
<br />
CarPartFactory class<br />
instanceVariableNames: 'partCatalog'<br />
<br />
CarPartFactory class>>make: partType<br />
"We moved this method; it's a class method now."<br />
| partClass |<br />
partClass := partCatalog at: partType ifAbsent: [^nil].<br />
^partClass new<br />
</syntaxhighlight><br />
<br />
<br />
이제 CarPartFactory의 구체적 하위클래스마다 하나씩의 클래스 인스턴스 변수가 있다. 모든 하위클래스가 내용을 공유하는 클래스 변수와는 달리, 각 하위클래스는 해당 클래스 인스턴스 변수에 대해 고유의 private 버전을 가지게 된다. 따라서 FordFactory 의 partCatalog 를 초기화한다고 해도 ToyotaFactory 의 partCatalog 에는 영향을 미치지 않는다. 각 클래스의 카탈로그는 다음과 같이 초기화할 수 있다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarPartFactory class>>new<br />
partCatalog isNil ifTrue: [self initialize].<br />
^self basicNew<br />
<br />
CarPartFactory class>>initialize<br />
"Initialize the part catalog. This is now a class method"<br />
partCatalog := Dictionary new<br />
<br />
FordFactory class>>initialize<br />
"Initialize the *local* part catalog"<br />
super initialize.<br />
partCatalog<br />
at: #car put: FordCar;<br />
at: #body put: FordBody;<br />
at: #engine put: FordEngine;<br />
...<br />
</syntaxhighlight><br />
<br />
<br />
이제 남은 일은 인스턴스 변수 대신에 각 하위클래스 고유의 클래스 인스턴스 변수를 사용하기 위해 부품 생성 인스턴스의 단일 메서드를 재정의하는 것이다. 이 메서드가 하는 일은 동일한 서명(signature)으로 클래스 메서드를 호출하는것 뿐이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarPartFactory>>make: partType<br />
"Create a new part based on partTyp"<br />
^self class make: partType<br />
</syntaxhighlight><br />
<br />
<br />
그럼에도 불구하고 DP 90 페이지에서 지적한 바와 같이 이 접근법은 제품군마다 새로운 구체 factory 하위클래스를 필요로 한다 (Fords 에 하나, Toyotas 에 하나). 메시징 인터페이스를 단일 메서드로 줄이긴 했지만 팩토리 클라이언트들은 여전히 여러 개의 팩토리 클래스들 중 인스턴스화해야 할 클래스를 선택해야 한다. 다만 유연성은 약간 감소될 수 있다; 새로운 제품군이 생길 때 새로운 하위클래스를 추가하는 것은 확실히 간단한 작업이지만, 그럼에도 좀 더 작은 단일-클래스 구현을 만들 수도 있다.<br />
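이 변형을 사용하는 클라이언트 코드는 대략 다음과 같은 모습이 된다(스케치):<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"클라이언트는 여전히 구체 factory 클래스를 선택하지만,<br />
부품 요청은 모두 동일한 make: 메시지를 거친다."<br />
carFactory := FordFactory new.<br />
car := carFactory make: #car.<br />
car addEngine: (carFactory make: #engine).<br />
</syntaxhighlight><br />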
<br />
<br />
<br />
====단일 factory 클래스====<br />
<br />
단일-클래스 해법은 하나의 factory 클래스만 구현해서, 인스턴스화하기 적절한 부품 클래스를 이용해 새로 생성되는 각 인스턴스의 부품 카탈로그를 초기화하는 것이다(partCatalog 를 인스턴스 변수로 사용하는 작업으로 돌아왔다). 다시 말해, CarPartFactory 는 어떠한 하위클래스도 가지지 않으며, 클라이언트들은 항상 CarPartFactory 자체를 인스턴스화한다.<br />
<br />
이 접근법에서는 프로그래머가 CarPartFactory 안에 단순한 클래스 메서드를 구현해야 하며, 각 메서드마다 의미를 지니는 factory 생성 이름을 가진다. 단일 factory 의 각 메서드는 팩토리의 유형마다 필요한 초기화를 수행한다. 이는 다음과 같은 형태를 가진다: <br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarPartFactory class>>fordFactory<br />
"Create and return a new Ford factory"<br />
| catalog |<br />
catalog := Dictionary new.<br />
catalog<br />
at: #car put: FordCar;<br />
at: #engine put: FordEngine;<br />
...<br />
^self new partCatalog: catalog<br />
<br />
CarPartFactory class>>porscheFactory<br />
"Create and return a new Porsche factory"<br />
| catalog |<br />
catalog := Dictionary new.<br />
catalog<br />
at: #car put: PorscheCar;<br />
at: #engine put: PorscheEngine;<br />
...<br />
^self new partCatalog: catalog<br />
</syntaxhighlight><br />
<br />
CarPartFactory 클래스를 사용할 클라이언트가 Ford factory 를 생성하고자 할 경우 다음과 같은 메시지 전송이 이루어진다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
carFactory := CarPartFactory fordFactory.<br />
</syntaxhighlight><br />
<br />
<br />
부품은 정확히 이전과 동일하게 생성된다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
carFactory make: #engine.<br />
</syntaxhighlight><br />
<br />
<br />
<br />
====교묘한 단일-클래스 구현====<br />
<br />
사용자는 (1) 일관된 명명규칙에 따라 우리의 모든 클래스를 정의하였고, (2) Smalltalk 는 반영적 특성<sup>reflective</sup>을 가진다, 라는 두 가지 사실을 이용하는 접근법을 통해 또 다른 단일-클래스 팩토리를 구현할 수 있다. 모든 부품 클래스 이름은 자동차 제작회사의 이름을 접두사로 하고 부품 이름을 접미사로 한다: FordEngine, ToyotaEngine, PorscheEngine; FordCar, ToyotaCar, PorscheCar. 이 규칙을 다음과 같이 이용할 수 있다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarPartFactory>>makeCar: manufacturersName<br />
"manufacturersName is a Symbol, such as<br />
#Ford, #Toyota, or #Porsche."<br />
| carClass |<br />
carClass := Smalltalk <br />
at: (manufacturersName, #Car) asSymbol<br />
ifAbsent: [^nil].<br />
^carClass new<br />
<br />
CarPartFactory>>makeEngine: manufacturersName<br />
| engineClass |<br />
engineClass := Smalltalk <br />
at: (manufacturersName, #Engine) asSymbol<br />
ifAbsent: [^nil].<br />
^engineClass new<br />
</syntaxhighlight><br />
<br />
<br />
이런 방식이 가능한 이유는, 모든 전역변수가 Smalltalk 라는 Dictionary 의 항목으로 저장되며, 클래스 역시 클래스와 동일한 이름의 전역변수로 참조되기 때문이다. 따라서 Smalltalk at: #FordEngine 이라는 표현식(expression)은 FordEngine 클래스 객체를 가져온다(retrieves).<br />
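이를 workspace 에서 직접 확인해볼 수도 있다. 다음은 간단한 스케치로, FordEngine 클래스가 이미지에 존재한다고 가정한 것이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"전역 Dictionary 인 Smalltalk 에서 클래스 객체를 가져온다."<br />
engineClass := Smalltalk at: #FordEngine ifAbsent: [nil].<br />
<br />
"가져온 클래스 객체는 다른 객체와 똑같이 메시지를 받을 수 있다."<br />
engine := engineClass new.<br />
</syntaxhighlight><br />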
<br />
이 접근법을 이용해 자동차 factory 는 항상 CarPartFactory 의 인스턴스가 된다. 자동차 부품 생성은 다음 형식을 취한다 (여기서 carCompany 는 사용자로부터 얻은 Symbol이다):<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
carFactory := CarPartFactory new.<br />
car := carFactory makeCar: carCompany.<br />
car<br />
addEngine: (carFactory makeEngine: carCompany);<br />
...<br />
</syntaxhighlight><br />
<br />
<br />
그러나 이 방법에는 단점이 있다. 정적 검사(static inspection)나 동적 런타임 추적(dynamic runtime trace)만으로는 코드를 이해하기가 힘든데, 클래스에 대한 참조가 실행 시점에 만들어지고, 그렇게 identity 가 즉석에서 결정된 클래스로 메시지가 전송되기 때문이다. 예를 들어, FordCar 클래스를 참조하는 메서드를 모두 찾기 위해 Smalltalk 개발환경의 도구를 사용한다면, makeCar: 메서드는 검색 결과로 나타나는 메서드 목록에 포함되지 않을 것이다. makeCar: 는 모든 자동차 클래스를 암묵적으로 참조하지만, 그 참조가 코드에 명시적으로 표현되어 있지는 않기 때문이다.<br />
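실제로 makeCar: 의 소스에는 #FordCar 라는 심볼 리터럴 자체가 존재하지 않는다. 검색 도구가 찾는 리터럴 대신, 심볼은 다음처럼 실행 시점에 조립된다 (workspace 스케치):<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"메서드 소스에는 #Ford 와 #Car 만 있을 뿐, #FordCar 는 실행 시점에 만들어진다."<br />
(#Ford , #Car) asSymbol   "결과: #FordCar"<br />
</syntaxhighlight><br />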
<br />
<br />
<br />
===알려진 Smalltalk 사용예===<br />
<br />
====UILookPolicy====<br />
<br />
VisualWorks 에서 UIBuilder 는 위젯의 특징들을 바탕으로 윈도우를 구성하는 책임을 가진다 (다른 기능들도 있지만). 각 위젯마다 UIBuilder 는 실제 위젯 인스턴스 생성을 수행하기 위해 그와 관련된 UILookPolicy 를 요청한다. UILookPolicy 의 서로 다른 하위클래스들이 최근 선택된 룩앤필(look and feel)에 따라 (윈도우, OS/2, Motif, Macintosh 또는 기본 Smalltalk 모양) 서로 다른 위젯을 인스턴스화시키기 위해 이러한 생성 메시지를 다형적으로 구현한다. 하지만 UIBuilder 는 그것이 다루는 모양 정책 객체(look policy object)가 무엇인지에 대한 지식이 없다. 다시 말해 UIBuilder 는 Win3LookPolicy, CUALookPolicy 또는 다른 정책 객체(policy object)와 연결되어 있는지 알지 못하거나 신경쓰지 않는다는 의미이며, 단지 추상적 UILookPolicy 프로토콜에 정의된 일반적 위젯 생성 메시지를 전송할 뿐이다. 따라서 모양 정책 객체는 위젯에 대한 추상 팩토리 역할을 한다.<br />
<br />
<br />
<br />
====Constant Method 이용하기====<br />
<br />
Constant 메서드를 사용해서 추상 팩토리를 구현하는 방법은 이미 소개한 바 있다. UILookPolicy 에 의한 사용 예제도 제시되었다 (팩토리 메서드에서도 이 예제를 취급한다). 예를 들어, UILookPolicy 에 슬라이더 위젯을 생성하는 메서드가 있다고 가정하자. <br />
<br />
<syntaxhighlight lang="smalltalk"><br />
slider: spec into: builder<br />
...<br />
component := self sliderClass model: model.<br />
...<br />
</syntaxhighlight><br />
<br />
<br />
model: 메시지는 sliderClass 가 반환한 클래스를 인스턴스화한다. UILookPolicy 는 sliderClass 의 기본(default) 버전을 정의하며, 구체 하위클래스 일부는 슬라이더를 각각 다른 모양으로 생성하기 위해 그것을 오버라이드한다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
UILookPolicy>>sliderClass<br />
^SliderView<br />
<br />
MacLookPolicy>>sliderClass<br />
^MacSliderView<br />
<br />
Win3LookPolicy>>sliderClass<br />
^Win3SliderView<br />
</syntaxhighlight><br />
<br />
<br />
<br />
===관련 패턴===<br />
<br />
====Builder====<br />
<br />
추상 팩토리는 Builder(47) 패턴과 밀접하게 관련되어 있다. 주요 차이점은 전체적인 제품 조립을 누가 책임지는가에서 온다. 추상 팩토리에서는 함께 작동하도록 보장된 부품을 팩토리가 반환한다. 하지만 하나의 최종 제품으로 조립하는 일은 factory 클라이언트가 한다. factory 로 전송되는 모든 메시지는 전체적인 최종 제품의 하위컴포넌트 또는 새로운 부품 등의 결과를 가져오며ㅡ즉 factory 안의 모든 부품 생성 메서드는 컴포넌트 부품을 반환한다는 의미ㅡfactory 의 클라이언트는 이 부품을 각각 생성되는 제품에 추가한다. 반면, Builder 는 제품이 조립되는 동안 제품을 유지할 수 있는 고유의 내부 상태를 가진다. Builder 가 "컴포넌트 A를 추가한다," "컴포넌트 B를 추가한다,"라는 요청을 듣게 되면 이 하위컴포넌트를 캡슐화된 Product 에 추가하는 것은 Builder 의 일이 된다. "컴포넌트 X를 추가하라" 라는 메시지가 전송될 때에도 Builder 는 아무 것도 반환하지 않는다. Builder 의 클라이언트가 하위부품의 추가작업을 완료하면 클라이언트가 Builder 에게 "최종 제품을 보여달라,"고 말한다. 그제야 Builder 는 최종 제품을 반환한다.<br />
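두 패턴의 차이를 클라이언트 관점에서 스케치하면 다음과 같다. 여기서 쓰인 메시지 이름(addEngine:, addBody, finalProduct 등)은 설명을 위해 가정한 것이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"추상 팩토리: 부품 생성 메시지마다 부품이 반환되고, 조립은 클라이언트가 한다."<br />
car := factory make: #car.<br />
car addEngine: (factory make: #engine).<br />
<br />
"Builder: 부품 추가 메시지는 아무 것도 반환하지 않고, 조립은 Builder 내부에서 일어난다."<br />
builder addEngine; addBody.<br />
car := builder finalProduct<br />
</syntaxhighlight><br />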
<br />
<br />
<br />
====Factory Method====<br />
<br />
factory 객체는 어떤 클래스를 인스턴스화할지 결정하기 위해 팩토리 메서드(63) 패턴의 Constant Method 변형을 호출할 수 있음을 앞에서 알아봤다. 팩토리 메서드는 실질적으로 추상 팩토리의 경쟁자라고 볼 수 있지만, 두 패턴은 어떻게 보면 서로 구조적으로 정반대에 해당한다. 추상 팩토리의 바닐라 구현을 살펴본 본문에서는 다수의 부품군을 고려하기 위해 다수의 구체 factory 클래스ㅡ제품군마다 하나의 클래스ㅡ를 정의하고 있다. 그 뒤에 단일 애플리케이션 객체가 동일한 코드를 이용해 어떤 제품군으로부터든 부품을 인스턴스화시킬 수 있도록 하였다. 애플리케이션은 원하는 제품군과 관련된 factory 클래스를 인스턴스화시킬 뿐이다 (Ford 부품을 원하면 FordFactory 를 인스턴스화하는 것이다). 구조 다이어그램에서 단일 클라이언트는 factory 클래스의 하위계층구조에 속한 하나의 factory 객체를 가리킨다는 점을 명심한다.<br />
<br />
팩토리 메서드 접근법을 이용할 경우 factory 클래스의 전체 계층구조를 정의하는 작업을 피할 수 있다. 하지만 그 대신 애플리케이션 클래스의 계층구조가 필요할 것이다. 모든 구체적 하위클래스에 대한 엄청난 양의 동작을 정의하면서도 factory 메서드의 정의는 하위클래스로 미루는 추상 애플리케이션 클래스가 필요할 것이다. 각 하위클래스는 그에 따라 factory 메서드를 오버라이드할 것이다: Ford 애플리케이션 클래스는 해당되는 factory 메서드가 Ford 부품을 반환하도록 시킬 것이며, Toyota 와 관련된 애플리케이션 클래스는 Toyota 부품 객체를 생성 및 반환하는 factory 메서드가 있을 것이다.<br />
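이 접근법을 코드로 스케치하면 다음과 같은 모양이 될 것이다. 클래스 이름(CarApplication, FordApplication 등)은 설명을 위한 가정이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
CarApplication>>makeEngine<br />
"어떤 엔진을 생성할지는 하위클래스에게 미룬다."<br />
^self subclassResponsibility<br />
<br />
FordApplication>>makeEngine<br />
^FordEngine new<br />
<br />
ToyotaApplication>>makeEngine<br />
^ToyotaEngine new<br />
</syntaxhighlight><br />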
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=Smalltalk_Translation_Dictionary&diff=5594Smalltalk Translation Dictionary2018-07-26T07:11:03Z<p>Onionmixer: /* DPSC(Design Patterns Smalltalk Companion) */</p>
<hr />
<div>;Smalltalk 번역 용어사전<br />
<br />
==Preface==<br />
<br />
이 문서는 Smalltalk 관련 문서를 번역 및 검수할 때 기준으로 사용하기 위한 단어들을 정리해둔 것입니다. 기준은 다음과 같습니다<br />
<br />
# 범용적으로 사용되는 의미는 당연히 번역해야 하지만, Smalltalk 에서 고유하게 사용되는 단어의 경우는 원문 영어를 그대로 유지한다<br />
#* workspace, transcript, Inspector 등<br />
# 다른 책에서 Object orientation 에 대해 기 언급된 내용은 가능하면 따라가는 것으로 한다.<br />
#* Object > 객체 등<br />
# 번역이 애매한 고유명사급의 단어는 원문 영어를 그대로 유지한다<br />
#* Pane 등<br />
# 고유한 느낌이 없으나, 영어로 읽어서 직관적으로 알기 쉬운 내용은 한글 발음 표기로 한다.<br />
#* 시스템 브라우저등<br />
# 타 환경(Programming environment) 에서 번역했으나 Smalltalk 고유로 별도의 의미를 가지는 경우는 원문 영어를 그대로 유지한다<br />
#* Primitive<br />
<br />
<br />
==Common Trans Dictionary==<br />
<br />
* 스몰토크 > Smalltalk<br />
* 작업공간 > Workspace<br />
* 워크스페이스 > Workspace<br />
* 프리미티브 > Primitive<br />
* 기본 메서드(Primitive Method) > Primitive 메서드<br />
* 트랜스크립트 > Transcript<br />
* 인스펙터 > Inspector<br />
* 탐색기 > Inspector<br />
* 시스템 브라우저 > System Browser<br />
* 수퍼클래스 > 상위클래스<br />
* super 클래스 > 상위클래스<br />
* 서브클래스 > 하위클래스<br />
* subclass > 하위클래스<br />
* 오브젝트 > 객체<br />
* 패널 > pane (경우에 따라)<br />
* 패널, 창 > pane(경우에 따라)<br />
* Object > 객체<br />
* Method > 메서드<br />
* 메소드 > 메서드<br />
* pretty-print(er) > 다듬어서 출력<br />
* expressions > 프로그램식<br />
* 프로그램식 > 그대로 프로그램식 으로 사용<br />
* dialects > 방언<br />
* 구체클래스 > 실체 클래스<br />
* 컴포넌트 > 구성 요소<br />
* 폰트 > 글꼴(경우에 따라)<br />
* 추상적 상위클래스 > 추상 상위 클래스<br />
* 메소드 > 메서드<br />
* 집합체 > Collection<br />
* 클래스측 또는 클래스측면 > class side<br />
* 가지 > 브랜치(원문 확인 후 적용)<br />
* 강력한 타이핑 > 강타입<br />
* 동적으로 타이핑 > 동적 타이핑<br />
* 쓰레기수집기 > 가비지 콜렉터<br />
* Accessor > 접근자<br />
* 리포지토리 > 저장소<br />
* return, 리턴 > 반환 - 고유명사가 아닌경우<br />
* 인쇄 > 출력<br />
* 툴 > 도구<br />
* 컨트롤러 > Controller(경우에따라)<br />
* 서브타입 > 하위타입<br />
* 수퍼타입 > 상위타입<br />
* life-cycle > 생존주기<br />
* 라이프사이클 > 생존주기<br />
* 재팩토링 > 리팩토링<br />
* 어플리케이션 > 응용프로그램<br />
* 포맷터 > 형식자(formatter)<br />
* 사각 괄호 > 꺾쇠 괄호<br />
* 버저닝 > 버전 관리<br />
* 개발툴 > 개발 도구<br />
* 구체클래스 > 실체 클래스<br />
* 리스트 > 목록<br />
* 스냅숏 > 스냅샷<br />
* 유추 > 추측<br />
* 에디터 > 편집기?<br />
* 플러그인가능성 > 연결가능성(pluggability)<br />
* 스택 > 스택(Stack)<br />
* 이름 공간 > namespace<br />
* message(객체지향) > 메시지<br />
* 이벤트 위주(event-driven) > 이벤트 주도<br />
* 셀렉터 > 선택자<br />
* selector > 선택자<br />
<br />
<br />
<br />
===Common Trans Dictionary Additional===<br />
<br />
* 어휘(vocabulary) > 용어<br />
* 개관(overview) > 개요<br />
* 조작(manipulation) > 취급<br />
* describes(설명) > 서술/만든다.나타낸다<br />
* associate,연관,관련 > 조합<br />
* properties > 속성 - 고유명사 아닌경우<br />
* destructuring > 구조분해, 비구조화<br />
* assignment > 할당<br />
* type > 유형 - 고유명사가 아닌경우<br />
<br />
==Squeak By Example==<br />
<br />
* 도구 플랩 > Tools Flap<br />
* 몬테첼로 > 몬티첼로<br />
* 익명함수 > 이름을 가지지 않는 함수<br />
* Morphic halo > 모픽할로(Morphic halo)<br />
* directory > 디렉터리 (국어사전 참조)<br />
* inspect > 검사하다<br />
* logic > 논리<br />
* round > (11장에서) 반올림.<br />
* user > 사용자(o)<br />
<br />
<br />
<br />
==DPSC(Design Patterns Smalltalk Companion)==<br />
<br />
* 셸 > 쉘? 껍데기? 외형?<br />
* 바닐라 > 순수 (vanilla 에 대한 별도의 해석을 주석으로 넣자. not modified)<br />
* 컨텍스트 > 단어의 경우는 "문맥" 이지만 고유명사로 쓰일때는 컨텍스트 라고 쓰자<br />
* 팩토리 메서드 > Factory Method<br />
* 느린초기화(lazy) > 지연초기화<br />
* 프로그래밍의 미 > 프로그래밍의 아름다움<br />
* 루트 뷰 > 최상위 view<br />
* 빌딩 > 빌딩(building)<br />
* 싱글톤 > singleton<br />
* 오퍼레이션 > 동작<br />
* 어댑터(패턴을 의미할때) > Adapter<br />
* 매핑 > 대입<br />
* 모델링 도메인 > 도메인 모델링<br />
* 비주얼 > 시각요소?<br />
* 브로커 > Broker? 중개? 중개인?<br />
* 대체할 수 있는 어댑터 > 대체 가능 어댑터<br />
* 선택기(원문단어 확인필요) > 셀렉터 ? Selector ? 선택자?<br />
* 애플리케이션 모델 > 애플리케이션 model<br />
* 스트로크 > 한선긋기<br />
* 행위(원문단어 체크필요) > behavior<br />
* 비주얼웍스 > Visual Works<br />
* 비주얼 스몰토크 > Visual Smalltalk<br />
* 국지화(locallize) > 지역화<br />
<br />
==Smalltalk Objects and Design==<br />
<br />
* 갈림 > 분기<br />
* 디자인 > 설계(경우에따라)<br />
* 트랜잭션로그 > 거래기록<br />
* recursion > 순환호출<br />
* 뷰 > View(경우에따라)<br />
* 모델 > Model(경우에따라)<br />
* 방송 > broadcast(경우에따라)<br />
* 솔리테르 > 솔리테어<br />
* 액션 > 행동 또는 동작(경우에따라)<br />
* 동형이성 > 동이형성<br />
* description 또는 기술 > 서술<br />
* 모핑 또는 morph > 변형<br />
* 조회(query) > 질의<br />
* business > 비즈니스<br />
<br />
<br />
<br />
==Deep into Pharo ESUG 2013==<br />
<br />
* zero 설정 > zero configuration? 최초설정?<br />
* 리턴 > 반환<br />
* resolve > 해결? 의결?<br />
* 서빙 > 제공<br />
* 트리거 > 계기 또는 꾸미다 또는 방아쇠<br />
* timeout > 시간만료<br />
* 시퀀스 > 장면 또는 순서<br />
* 개인 설정 > 미정(원문확인필요)<br />
* 맞춤 설정 > 미정(원문확인필요)<br />
* pragma > 컴파일러 지시문(pragma)<br />
** https://rmod.inria.fr/archives/papers/Duca16a-Pragmas-IWST.pdf<br />
** http://mousevm.tistory.com/28<br />
* 디자인 > 설계(경우에따라)<br />
* 매치(정규식) > 대입? 일치?<br />
* 문자열 > string(경우에따라)<br />
* 스트림 > stream(경우에따라)<br />
* 로그 > 기록(경우에따라)<br />
* 액션 클릭 > 의미 조사 필요<br />
* 셋, 세트 > 설정하다<br />
* 국부적 > 의미 조사 필요, 원문 조사 필요<br />
* 릴리즈, release > 출시? 놓다?<br />
* dispatch > 급파? 발송?<br />
* 액션 > 행동? 동작?<br />
* 루프 > 반복? 고리?<br />
* 언와인딩 > 풀림? 풀기?<br />
<br />
<br />
<br />
==The Art and Science of Smalltalk==<br />
<br />
* 스몰토크의 예술과 과학 > "The Art and Science of Smalltalk"<br />
* 루링 > 반복? 고리?<br />
* snippet > 조각? 토막?<br />
* templete > 템플릿<br />
* 업데이트 > 갱신(경우에따라)<br />
* 풀 > pool<br />
* 컬렉션 > Collection(경우에따라)<br />
* 체이닝 > 연쇄<br />
* 어댑터 > adapter<br />
* 에선 > 에서는<br />
* 하곤 > 하고는<br />
* 하길 > 하기를<br />
* 하여 > 해서(경우에따라)<br />
* 들어보겠다 > 들어보자<br />
* 통지자 > notifier(원문확인필요)<br />
* 플러그인가능성 > 연결가능성(pluggability)<br />
<br />
<br />
<br />
==Smalltalk Best Practice Patterns==<br />
<br />
* 요소 > element(경우에따라)<br />
* 매핑 > 대입<br />
* 명칭 공간 > namespace(원문확인 필요)<br />
* 명명 > 이름짓기</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:2.3&diff=5592DesignPatternSmalltalkCompanion:2.32018-07-24T09:27:39Z<p>Onionmixer: 검수 20180724</p>
<hr />
<div>===2.3 장면 3: 데이터베이스 스키마와 Dream===<br />
<br />
몇 주 후 돈은 MegaCorp 회사 복도에서 제인과 마주친다.<br />
<br />
{| style="border: none; width:100%;"<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 제인! 지난번에 Interpreter 패턴을 알려줘서 고마워. 덕분에 QueryStrategy를 Abstract-ReimbursementStrategy의 또 다른 하위클래스로 빌드(build)할 수 있었어. 패턴에서 말한 대로 파서(parser)와 파스 노드(parse node)의 계층구조를 빌드해서 특정 원칙이 적용되었는지 결정하기 위해 사용했거든. 패턴을 이해하고 나니 정말 간단하던걸.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 잘됐네. 도움이 됐다니 기쁜걸.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 그 뿐만 아니라 시스템 내에 다른 패턴들도 찾기 시작했어. 나도 알지 못했는데 ''[디자인 패턴]''에 소개된 패턴을 우리가 이미 몇 가지 쓰고 있더라구. 예를 들어 Policy 클래스에는 현장에서 일하시는 분들에게 적용되는 사전 구성된 표준 문안 버전이 여러 개가 있었어. 새로운 Policy 를 만들고 싶으면 기존 표준 문안 버전을 선택해서 시스템이 그것을 복사하도록 시키지. 알고 보니 Prototype 패턴이더라구. 패턴을 읽고 나니 바로 이해가 됐어. 사실 패턴에서 "Prototype Manager" 라는 발상이 마음에 들어서 우리 설계에도 이용하게 됐어.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 놀라운 일도 아냐. 책에서는 디자인 패턴의 문제에 대해 가장 공통되고 많이 수용되는 해법의 일부를 실어 놨거든. 계속해서 재발견될 것이라 보는 편이 옳아.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 또 한 가지 재미있는 점은 우리 코드에서 패턴을 발견하게 된 후로 내가 사람들에게 이제부터 패턴의 약칭을 사용하자고 설득했어. 이제 "그 클래스는 Decorator일 뿐이야"라거나 "여기에 템플릿 메서드(Template Method)를 사용하는 게 어때?"처럼 약칭으로 서로 대화하고 있어.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 맞아, 그 점이 디자인 패턴의 가장 귀중한 부분 중 하나야: 설계자들이 서로 의사소통할 수 있는 공통된 단어를 제공하는 거지. 이제 작업할 용어집이 동일하니까 설계에 대해 이야기하는 것도 수월해지고 있어.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 나도 동의해. 디자인 패턴으로 인해서 팀이 일하는 방법이 바뀌었어. 넌 요즘에 무슨 작업을 하고 있어?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 사실 지난 수개월 간 작업해온 관계형 데이터베이스 액세스 프레임워크를 기록하고 있어. 설계에 사용된 디자인 패턴을 내가 쓰는 시스템의 기록문서에서 일부로 쓸 계획이야.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 한 번 보여줄 수 있어?!<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 그래. 화이트보드를 봐. ''[둘은 제인의 칸막이 사무실로 가 제인이 화이트보드에 그려진 다이어그램을 가리킨다]'' 여기서 시작해볼게. ''[ForeignKeyProxy라는 이름의 클래스를 가리킨다]'' 이 클래스는...<br />
|}<br />
<br />
<br />
[[image:dpsc_2.3_01.png]]<br />
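다이어그램의 ForeignKeyProxy 가 동작하는 한 가지 방식을 스케치하면 다음과 같다. doesNotUnderstand: 를 이용한 전형적인 Smalltalk Proxy 구현 스케치이며, 메시지와 변수 이름(objectForKey:, realObject, foreignKey 등)은 설명을 위한 가정이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
ForeignKeyProxy>>doesNotUnderstand: aMessage<br />
"실제 객체가 처음 필요해질 때 DatabaseBroker 를 통해 로드한 뒤 메시지를 전달한다."<br />
realObject isNil<br />
ifTrue: [realObject := DatabaseBroker default objectForKey: foreignKey].<br />
^realObject perform: aMessage selector withArguments: aMessage arguments<br />
</syntaxhighlight><br />
<br />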
<br />
<br />
{| style="border: none; width:100%;"<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| ''[제인의 말을 가로채며]'' 클래스 이름만 봐도 알아볼 수 있겠는걸. 이건 Proxy야. 즉 실제 객체가 로드(load)될 때까지 다른 객체를 대신 실행시키지. 그치?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 너 이제 완전히 이해했구나? 맞아. 객체는 데이터베이스 테이블에 Foreign key 를 계속 보유하고 있어. 필요시에 그 테이블로부터 실제 객체를 인스턴스화시키도록 DatabaseBroker 와 같이 실행하면 실제 객체로 대체되지.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 음, 그건 클래스 이름으로 확실히 드러나는군. 이 설계에 또 어떤 패턴을 사용했어?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 전체 서브시스템에서 Facade 역할을 하는 DatabaseBroker 클래스가 있어. 클래스의 클라이언트들이 서브시스템의 상세내용을 볼 필요가 없음을 의미하지; 단지 이 클래스의 public 인터페이스를 통해 실행될 뿐이야. 그리고 여기 DatabaseConnection 클래스는 Singleton이지. 이건...<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 시스템에는 해당 클래스의 인스턴스가 한 번에 하나만 존재한다는 걸 의미하지. 그럼 나머지 클래스들은? 거기에 해당하는 패턴이 없을까?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| ''[씩 웃으며]'' 응, 없어. 디자인 패턴은 알고 나면 빠지기가 쉬워. 어떤 때는 패턴이 없는데도 보이기 시작해. 디자인 패턴이 모든 문제를 해결해주진 못할 거야; 단지 좀 더 공통으로 나타나는 문제에 해법을 찾도록 도와줄 뿐이지. 그나저나 너희 팀에서 사용자 인터페이스 클래스의 문서화를 시작할 참이라고 들었는데?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 맞아. 혹시 그 클래스에서도 패턴을 찾아봐야 할까? 하지만 대부분이 VisualWorks GUI 빌더(VisualWorks GUI builder)로 빌드했는데. 우리가 만든 클래스를 몇 개 추가하긴 했지만 대부분은 도구(tools)에서 제공하는 것들을 이용했거든. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 하지만 패턴은 기존 도구와 애플리케이션이 어떻게 작동하는지 설명하기에 좋은 방법이야. 예를 들어 VisualWorks의 ApplicationModel 클래스는 창에서 다른 뷰(View)들 간의 Mediator 처럼 행동하거든. 그리고 VisualWorks 가 뷰(View)와 모델(Model) 간 업데이트를 다루는 방법은 Observer 패턴을 구현한 거야. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 있잖아, 우리 시스템의 몇 가지 측면을 그렇게 설명하면 도움이 될 것 같아. 우리 이야기를 바탕으로 정리해보면, 새로운 설계를 빌드하거나 다른 설계자들과 설계 선택권에 대해 이야기할 때, 또는 기존 설계를 문서화시킬 때 패턴을 이용하면 좋겠다는 결론이 나오네. 그렇지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 정확해. 그건 그렇고, ''[디자인 패턴]'' 책은 언제쯤 돌려받을 수 있을까? <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| ''[웃으며]'' 다 읽고 나면 줄게.<br />
|}<br />
<br />
<br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:2.2&diff=5591DesignPatternSmalltalkCompanion:2.22018-07-24T09:22:21Z<p>Onionmixer: 검수 20180724</p>
<hr />
<div>===2.2 장면 2: 원칙은 깨어져선 안 된다.===<br />
<br />
''[며칠 후, 돈은 까다로운 설계 문제로 인해 제인에게 도움을 요청한다.]''<br />
<br />
{| style="border: none; width:100%;"<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 제인, 시간 있어?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 물론. 무슨 일이야?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 또 다른 설계 문제가 생겨서 말이야, 너한테 도움을 받을 수 있을까 해서.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 물론이지.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 심사 과정이 어떻게 작동할지에 대해 몇 가지 측면을 알아보려고 해. Claim이 어떻게 수락되어 지급되는지와 어떻게 거부되는지에 대해서 말이야.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 응, 지난번에 네가 설명해줬던 기억이 나.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 내 문제는 바로 이거야. 결국 보상금을 어떻게 지급하느냐로 귀결되거든. 그런데 요구문서에서 설명하는 원칙들이 서로 달라서, 이걸 어떻게 나타낼지 모르겠어.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 요구문서 좀 보여줄래? ''[돈이 문서를 건네주고 다음 단락을 함께 읽는다]''<br />
<br />
* 정책은 보장되는 항목의 목록으로 구성되어 있고, 각 항목과 관련된 지급원칙도 함께 있다. 지급에는 몇 가지 원칙이 있다:<br />
* 특정 항목에서는 절차와 관련해 전혀 보상하지 않는다 (예: 보상 거부).<br />
* 특정 항목에서는 균일한 달러 금액을 지급한다 (예: 123.4 절차의 경우 $25를 지불한다).<br />
* 특정 항목에서는 해당하는 비용 중 일정 비율만 지급한다 (예: 234.5 절차의 경우 병원비의 50%를 지급한다).<br />
<br />
그리고 지급원칙이 두 가지가 더 있어.<br />
<br />
* 손절매 원칙(Stop-loss rule)은 개별 항목에 지급되는 최대금액을 조절한다 (예: 비용의 70% 또는 $500 중 적은 금액을 지급한다). <br />
* 쿼리기반의 원칙(Query-based rule)은 청구인의 속성을 기반으로 한다 (예: 청구인이 여성일 경우 본 절차에 $200를 지급하고; 그 외에는 $150를 지급한다).<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 어떻게든 이런 절차 코드를 서로 다른 원칙에 맞출 수는 있었어. 하지만 각 원칙은 사용자가 결정한단 말이지! 원칙을 Policy로 코드화할 수 있으면 좋겠지만, 문제는 사용자가 어떤 원칙을 원할 것인지 미리 아는 방법이 없다는 거야. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 내가 보기엔 각각의 "원칙"들이 보상 전략(Strategy)으로 들리는데?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 그렇게 말하니 흥미로운데? 무슨 의미지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| Strategy는 또 다른 패턴이야. Strategy 패턴의 의도를 읽어줄게 ''[디자인 패턴 서적에 다가가 책을 펼친 후 다음 문장을 읽는다]'' : "알고리즘 군을 정의하고, 각각의 알고리즘을 별도의 클래스로 캡슐화하여 상호 교환이 가능하게 만든다. Strategy 패턴은 이를 사용하는 클라이언트에게 영향을 주지 않고 독립적으로 다양하게 나타난다."<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 글쎄, 보상은 알고리즘이 아닌 것 같은데. 그래도 시도는 해보고 싶어. Strategy 패턴이 어떻게 실행되는지 보여줘 봐.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 각각의 원칙을 하나의 Strategy 객체로 만들 순 없는지 살펴보자. 기본적으로 각 원칙은 한 라인 항목에 대한 총합을 계산하는 것이 맞지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 맞아.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 그러니까 각각의 객체는 reimbursementFor: aLineItem라고 불리는 메시지를 이해해야만 해. 각 원칙마다 하나의 클래스가 있어. 네가 설명한 첫 번째 원칙이 뭐였지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| "절차와 관련해 전혀 보상하지 않는다."<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 특별한 사례의 경우네. 이 원칙을 reimbursementFor: 로 구현하는 건 쉽겠어: 반환(return) 값이 항상 0이 되니까.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 두 번째는 어때? 균일한 달러 금액을 지급하는 원칙 말이야. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 보상금액을 계속 유지하는 인스턴스 변수가 하나 있을 거야. 그럼 reimbursementFor: 메서드가 그 인스턴스 변수 값을 반환할 거야. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 이제 이해가 가기 시작했어. 그럼 "라인 항목에 해당하는 비용 중 일정 비율만 지급"하는 원칙은 퍼센트를 저장하고 reimbursementFor: 메서드가 라인 항목의 비용에 퍼센트를 곱한 결과를 반환하겠지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 맞아. 3개의 클래스 계층구조는 이렇게 되겠지. <br />
|}<br />
<br />
[[image:dpsc_2.2_01.png]]<br />
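대화에서 설명한 세 가지 원칙을 코드로 스케치하면 다음과 같다. 클래스와 인스턴스 변수 이름(amount, percentage 등)은 설명을 위한 가정이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
NoReimbursementStrategy>>reimbursementFor: aLineItem<br />
"절차와 관련해 전혀 보상하지 않는다."<br />
^0<br />
<br />
FlatFeeStrategy>>reimbursementFor: aLineItem<br />
"균일한 달러 금액을 지급한다."<br />
^amount<br />
<br />
PercentageStrategy>>reimbursementFor: aLineItem<br />
"해당하는 비용 중 일정 비율만 지급한다."<br />
^aLineItem cost * percentage<br />
</syntaxhighlight><br />
<br />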
<br />
{| style="border: none; width:100%;"<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| Claim은 특정 절차 코드에 어떤 원칙을 적용할 것인지 찾아본 후에 그 절차에 맞는 원칙을 적용할 필요가 있어. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 나쁘지 않은 걸! 문제 일부는 해결됐네. 그럼 다음 건 어때? 손절매 원칙은 앞서 말한 두 원칙처럼 작동하질 않아. 다른 원칙 하나를 먼저 실행한 후에 그 결과를 바탕으로 상환금액을 결정해야 할 것 같은데. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 네가 뭔가 중요한 걸 발견한 것 같은데. 한번 찾아볼게 ''[디자인 패턴 서적을 들고 책을 넘기기 시작한다]''. 여기 새로운 행위를 추가하기 위해 런타임에서 객체의 행위를 변경하는 방법을 알려주는 패턴이 있어. 아, 찾았다! Decorator 패턴이야 ''[책을 넘겨 177페이지에 실린 다이어그램을 돈에게 보여준다]''. 이 다이어그램에서 어떻게 Decorator가 기존 객체가 아닌 다른 객체의 인스턴스를 포함시키면서 기존 객체와 동일한 인터페이스를 구현하는지 이해하겠어? 네 문제도 비슷하게 해결하면 될 것 같아. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 또 헷갈리네. 기존 객체는 뭐고 다른 객체는 뭐야?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 구체적으로 말해줄게. StopLossStrategy라 불리는 다른 원칙이 있다고 가정하고, 그 안에 다른 Strategy가 포함되어 있다고 쳐. 그럼 이 원칙은 원칙에 포함된 Strategy 로 메시지를 전달(forwarding)한 다음 그 결과가 손절매 금액을 초과하는지 확인함으로써 reimbursementFor: 메서드를 구현하는 거야.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| ''[망설이며]'' 네가 무슨 말을 하는지 알 것 같아. 이런 모양이 되겠다는 말이지? ''[제인의 그림을 가져와 새로운 클래스를 추가한다]''.<br />
|}<br />
<br />
[[image:dpsc_2.2_02.png]]<br />
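제인이 설명한 StopLossStrategy 를 스케치하면 다음과 같다. 인스턴스 변수 이름(strategy, stopLossAmount)은 설명을 위한 가정이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
StopLossStrategy>>reimbursementFor: aLineItem<br />
"포함된 Strategy 에 먼저 전달한 뒤, 그 결과를 손절매 금액으로 제한한다."<br />
^(strategy reimbursementFor: aLineItem) min: stopLossAmount<br />
</syntaxhighlight><br />
<br />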
<br />
{| style="border: none; width:100%;"<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 맞아.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 그럼 이 디자인 패턴이란 건 정말 효과적인걸! 하나의 설계에 하나 이상의 패턴을 통합했잖아. 앞에서 네가 설명한 방법대로라면 독자적으로 작동할 줄 알았거든.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 항상 그런건 아냐. 디자인 패턴은 하나만 사용할 수 있지만 대개는 하나의 설계에 몇 가지 패턴이 함께 사용되는 걸 볼 수 있을 거야. 함께 자주 사용되는 특정한 패턴들이 있어. ''[디자인 패턴]''의 각 패턴 마지막 부분에 "관련 패턴"이라는 절에서 열거하고 있어. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 저자들이 그걸 생각해 냈다니 다행이야. 하지만 어떤 패턴을 적용할 것인지는 어떻게 알지? 넌 그냥 임의로 뽑은 것 같아서 말이야. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 꼭 그렇지만도 않아. 내가 패턴을 선택하는 과정이 전혀 과학적이지 않다는 건 인정하지만 말이야. 지난 번 PLoP(Patterns Languages of Programs) 학회에서 누군가 한 패턴 전문가에게 어떻게 패턴을 결정했는지 질문하는 걸 우연히 들었거든. 전문가는 "내가 읽었지만 절반은 잊어버린 종이 조각에 대해 생각해보고, ''[디자인 패턴]'' 내부의 맨 앞 양면 페이지를 살피기도 하고, 때로는 추측에 맡겼죠,"라는 말을 했어. 패턴을 읽고 제2의 천성이 될 때까지 충분히 적용해보는 것이 핵심인 것 같아.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 그렇구나. 패턴에 관해 실제로 읽을 필요가 있는 것 같아.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 정말 많이 배울 수 있을 거야. 내 책을 빌려갈래? ''[책을 돈에게 건네준다]''.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 당연하지. ''[그는 책을 받는다]''. 가서 이 설계를 문서화 시켜야겠어. 그건 그렇고 쿼리기반의 원칙에 대해서는 해줄 말 있어?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 글쎄, 런타임 쿼리를 처리하는 Interpreter 패턴을 이용할 수 있지. ''[디자인 패턴]'' 243페이지를 찾아 봐.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 좋았어! 한 번 읽어볼게. ''[제인의 칸막이 사무실을 급히 뛰어나가며 책을 흔든다]''<br />
|}<br />
<br />
<br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:2.1&diff=5590DesignPatternSmalltalkCompanion:2.12018-07-24T09:19:56Z<p>Onionmixer: 검수 20180724</p>
<hr />
<div>===2.1 장면 1: 혼란에 빠지다===<br />
<br />
이 이야기는 피로한 기색이 역력한 돈이 조용히 앉아 키보드를 치고 있는 제인의 칸막이 사무실로 다가가는 장면부터 시작된다.<br />
<br />
<br />
{| style="border: none; width:100%;"<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 저기, 제인, 혹시 이 문제 좀 도와줄 수 있을까? 며칠간 이 요구문서(requirements document)를 보고 있는데 도저히 이해할 수가 없어서 말이야. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 그래, 그러지 뭐. 뭐가 문제야? <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 이게 바로 내가 요청 받은 보험청구처리 작업흐름 시스템인데 말이야. 객체들이 어떻게 작동할지 알 수가 없어. 시스템에서 기본 객체는 찾았다고 생각되는데 객체의 행위를 어떻게 이해해야 할지 모르겠어. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 여태까지 한 작업을 보여줄래? <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 여기, 요구문서1 중에 문제가 되는 부분을 보여줄게.<br />
<br />
# ''데이터 입력.'' 다양한 시스템으로 구성되어 있으며, 다양한 소스로부터 건강보험청구가 수신된다. 이는 모두 유일한 식별자를 할당함으로써 기록된다. 보험청구 문서와 지원문서가 스캔된다. 스캔된 보험청구 문서와 팩스는 OCR(광학식 문자 인식)로 처리되어 각 양식 필드('''form field''')와 관련된 데이터를 읽어낸다.<br />
# ''유효성 검사''. 스캔되어 입력된 양식은 일관성 있고 완벽하게 기입되었는지 보장받기 위해 유효성 검사를 거친다. 불완전하거나 부적절하게 기입된 양식은 시스템에 의해 거부되고, 청구자에게 재접수 요청이 전송된다.<br />
# ''제공자/보험의 일치''. 자동화된 처리에 의해 보험청구에 기입된 보험 (현재 지급하려는 보험청구의 계약) 및 의료보험 제공자(예: 의사)와 전체적인 보험청구처리기관과 계약한 제공자들 중 일치하는 결과를 찾기 시작한다. 정확하게 일치하는 결과가 없을 경우, 프로그램은 사운덱스 기술을 (Soundex technology; 유사한 발음의 단어를 찾아내는 알고리즘) 바탕으로 가장 비슷한 결과를 확인한다. 시스템이 가능성 있는 결과를 일치성이 높은 순으로 정리하여 정보 취급자에게 보여주면 그는 정확한 제공자를 식별한다.<br />
# ''자동 판정''. 시스템은 보험청구금액을 지급할 수 있는지와, 보험청구와 관련된 주요 데이터 항목 간에 비일관성이 발견되지 않는다는 조건에 한해 얼만큼 지급하는지를 결정한다. 비일관성이 발견될 경우 시스템은 적절한 보험청구 심사에 의한 처리를 받을 수 있도록 보험청구를 "보류('''pend''')"한다.<br />
# ''보류된 보험청구의 심사''. 심사관은 보험청구 이력 또는 청구서 원본의 제출을 확인하기 위해 시스템에 접근할 수 있다. 심사관은 지급액에 대한 보험청구를 허락하여 적절한 지급액을 명시하거나 보험청구를 거부하는 답변서('''correspondence''')를 생성시킨다.<br />
<br />
나머지 시스템이 중심으로 다루는 "Claim"이 있는 걸 확인했거든. 근데 방해가 되는 요소는 바로 Claim이 사람마다 다르단 점이야! Claim에 작업할 때 내가 대화하는 상대마다 다르게 나타나는 것 같아. 매번 책임성을 확인했다고 생각할 때마다 또 새로운 것이 나타나. 뿐만 아니라 "재량껏 수정하세요"나 "아끼세요"와 같이 Claim에 대한 단순한 책임성은 작업흐름의 어디에 위치하는지에 따라 여러 의미를 가질 수 있는 것 같아. <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 으음. 나도 이런 걸 읽은 적이 있는데. 한 번 찾아볼게. ''[그녀는 그녀의 책상에 놓인 서류를 바스락거린다]'' 어, 찾았다. Object Magazine 이번 호에 주문 관리 시스템에서 State 패턴을 이용하는 데 관한 기사가 하나 있어.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 내 말을 뭘로 들은 거야? 이건 주문 관리 시스템이 아니라 보험청구 처리 시스템이야.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 알고 있어, 그치만 저자가 State 패턴을 작업흐름 문제에 적용시킨 방법이 재밌어. ''[잡지를 뒤적인다]'' 그래, 여기 기사야. ''[기사를 훑어본다]'' 흐음…그렇지…Claim이 위치할 수 있는 상태 전이 다이어그램('''state transition diagram''')을 그려봤어?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| ''[놀란다]'' 상태 전이 다이어그램? 아니, 그건 생각을 못해봤어. 그걸 왜 그려야 하지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 이렇게 되는 거야. 작업흐름 실행은 State 간에 전이가 발생하는 거야. 각 Claim은 마지막에 발생한 일에 따라 그것이 위치할 수 있는 상태의 집합을 가져. 예를 들어서 네 작업흐름에서 첫 항목이 뭐지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 데이터 엔트리.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 그 다음은?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 유효성 검사.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 그럼 모든 Claim이 유효성 검사로 통과하는 거야?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 아니. 일부는 유효성 검사단계에서 거부 돼. 통과하는 경우에만 보험과 보험 제공자에 일치하는 결과를 찾지.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 그럼 다이어그램을 이렇게 그려보겠어? ''[제인이 펜과 종이를 들어 다음의 다이어그램을 그린다.]''<br />
|}<br />
<br />
[[image:dpsc_2.1_01.png]]<br />
<br />
{| style="border: none; width:100%;"<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| style="text-align:left;float:left;" | ''[조심스럽게]'' 응, 해볼 순 있지. 이걸로 뭘 하는거지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| style="text-align:left;float:left;" | 그냥 시키는대로 따라해 봐. 작업흐름에서 다음 단계는 뭐야? <br />
|}<br />
<br />
''[두 사람은 작업흐름의 모든 단계를 차례로 그려 나가고, 마침내 다음과 같은 다이어그램을 완성한다.]''<br />
<br />
[[image:dpsc_2.1_02.png]]<br />
<br />
{| style="border: none; width:100%;"<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 좋아. 상태 전이 다이어그램이 완성됐어. 그런데 이 그림이 어떻게 Claim 작업을 이해하는데 도움이 된단 말이지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 글쎄, 기사에선 State 디자인 패턴을 작업흐름 객체 설계에 적용할 수 있다고 나와 있어. 이전 다이어그램에 있는 각 State가 클래스가 될 거야. Claim은 이 State 클래스들 중 하나의 인스턴스를 가지게 돼. 네 설계는 이런 모양이겠지: ''[제인은 새 용지로 넘겨 다음 페이지에 그려진 OMT 다이어그램을 그린다]''<br />
|}<br />
<br />
[[image:dpsc_2.1_03.png]]<br />
<br />
{| style="border: none; width:100%;"<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 다시 묻지만 이게 어떻게 도움이 된단 말이지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| ''[소리 없이 활짝 웃으며]'' 거의 다 돼가! 자, 네가 겪는 문제 중 하나는, 작업흐름 내에 보험청구의 위치에 따라 "수정"의 의미가 변한다는 거였지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| ''[대화의 중심이 본인 문제로 돌아갔다는 사실에 기뻐하며]'' 맞아. 예를 들어 데이터를 입력하는 사람은 이런 UI('''사용자 인터페이스''')를 보게 돼 ''[인터페이스 그림 하나를 보여준다]''. 반대로 시스템이 보험과 제공자에 일치하는 결과를 찾지 못하면 이런 화면이 나타나 ''[두 번째 그림을 보여준다]''. 그리고 보험청구를 조정해야 하는 심사관은 이런 화면을 보게 돼 ''[세 번째 그림을 보여준다]''. 이 중 어떤 화면을 보여줘야 하는지가 문제야.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 음, 그렇다면 "수정"을 위한 코드를 Claim이 현재 속한 특정 State로 위임하면 돼. 여길 예로 들어보자 ''[클래스 다이어그램에서 Entered 상태를 가리킨다]''. Claim이 이 상태에 있다면 네가 방금 나에게 보여준 첫 번째 화면을 반환하는 거야. Claim이 MatchPended 상태에 있으면 두 번째 화면을, 그리고 AdjudicationPending 상태에 있으면 마지막 화면을 반환하는 거지. Claim은 어떤 화면을 표시해야 할지 알아내기 위해 해당 State에 화면을 요청한 후, 그 UI에게 화면을 열도록 시키는 거야.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| ''[부드럽게]'' 내가 생각했던 것보다 훨씬 간단하네. 난 어떤 사람이 Claim을 마지막으로 실행했는지 확인한 후에 그 결과를 바탕으로 화면을 표시하는 조건 코드가 훨씬 많을 거라고 예상했거든. 이 방법이 훨씬 간단하구나.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 이것이 바로 디자인 패턴의 힘이라고 할 수 있지.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 조금 전에도 그 말을 언급했지? 대체 무슨 뜻이야?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 디자인 패턴은 계속해서 발견되는 디자인 문제에 대해 반복해서 사용되는 해법에 대한 접근법이야. '''디자인 패턴'''이라는 책도 있지. 이 책은 23개의 패턴을 수록하고 있는데, 각 패턴은 특정 부류의 문제에 객체 기반의 해법을 제시하고 있어.<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| ''[믿지 못하겠다는 듯]'' 그럼 이 책에 내 문제에 대한 답이 모두 있단 뜻이야?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 그런 건 아냐. 가장 많이 발생하는 문제와 해법을 몇 가지씩 소개하고 있어. 방금 우리가 이야기한 해법은 State 패턴이라고 불러. 각 패턴은 패턴이 만들어진 의도를 설명하는 문장으로 시작해. State 패턴은, "객체의 내부 상태에 따라 객체가 행위를 변경할 수 있게 한다"는 의도를 가지고 있어. 네 Claim에 필요한 패턴이라고 생각하지 않니? <br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 듣고 보니 그러네. 하지만 네가 이런 말을 하기 전엔 생각도 못했을 거야. 내게 이 패턴이 필요할지는 어떻게 알지?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 제인:<br />
| 기사를 읽었던 것이 행운이었어. 하지만 '''디자인 패턴'''에 소개된 패턴들에 좀 더 익숙해지고 나니 어떻게 새로운 문제에 더 쉽게 적용시킬 수 있을지가 눈에 보여. 빌려줄까?<br />
|-<br />
| style="text-align:right;width:40px;float:left;" | 돈:<br />
| 아니, 지금은 됐어. 이 새로운 설계를 쓰기 시작했거든. 나중에 빌려 갈게. 도와줘서 고마워!<br />
|}<br />
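<br />
제인이 설명한 위임 구조를 Smalltalk 코드로 간단히 스케치해보면 다음과 같다. 여기 쓰인 클래스와 메서드 이름(editScreenFor: 등)은 설명을 위해 가정한 것일 뿐 실제 시스템의 코드는 아니다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"Claim은 화면 선택을 현재 상태 객체에 위임한다"<br />
Claim>>editScreen<br />
	^self state editScreenFor: self<br />
<br />
"각 State 하위클래스가 자신에게 맞는 화면을 반환한다"<br />
EnteredState>>editScreenFor: aClaim<br />
	^DataEntryScreen on: aClaim<br />
<br />
MatchPendedState>>editScreenFor: aClaim<br />
	^MatchReviewScreen on: aClaim<br />
<br />
AdjudicationPendingState>>editScreenFor: aClaim<br />
	^AdjudicationScreen on: aClaim<br />
</syntaxhighlight><br />
<br />
이렇게 하면 돈이 걱정하던 긴 조건 코드 없이, Claim이 참조하는 상태 객체를 바꾸는 것만으로 "수정"의 의미가 달라진다.<br />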
<br />
<br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:Head02&diff=5589DesignPatternSmalltalkCompanion:Head022018-07-24T09:09:38Z<p>Onionmixer: 검수 20180724</p>
<hr />
<div>===아하!===<br />
<br />
디자인 패턴에 대한 구체적인 설명을 시작하기 전에, 다수의 패턴에 동반되는 통찰의 유형을 보여주는 사례연구를 하나 소개하고자 한다. GoF(Gang of Four)는 [디자인 패턴]편 서문을 통해 디자인 패턴을 이해하는 동안 “응?” 에서 “아하!” 로 변하는 경험을 언급한 바 있다. 이번 장에서는 이러한 변화를 설명하는 이야기를 하나 제시하려 한다. 이 이야기는 3개의 장면으로 구성된다: MegaCorp 보험회사에서 근무하는 두 명의 Smalltalk 프로그래머들의 생활 중 3일간의 이야기이다. 돈(Don, 객체지향에 있어서는 초보자이지만 숙련된 사업분석가)과 제인(Jane, 객체와 패턴 전문가)의 대화를 살펴보도록 하자. 돈은 설계 중 발생한 문제를 제인에게 가져가서 함께 문제를 해결한다. 두 사람은 가상의 인물이지만 설계는 실제 이야기이며, 모든 설계는 Smalltalk 로 쓰여진 실제 시스템의 일부이다. 이번 장은 세심한 분석을 통해 디자인 패턴이 어떻게 실세계 문제에 대한 해답을 도출하는 데 도움이 되는지를 보여주는 것을 목표로 한다.<br />
<br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:1.7&diff=5588DesignPatternSmalltalkCompanion:1.72018-07-24T09:06:11Z<p>Onionmixer: 검수 20180724</p>
<hr />
<div>===1.7 본 책에 사용된 규약===<br />
<br />
책에 사용된 규약은 거의 없으며, 있더라도 간단하다. 해설 문단 중에 코드의 일부나 클래스 이름, 메서드 이름을 포함시킬 경우 이렇게 표기할 것이다. 설명 도중에 본문과 구분해서 예제 코드를 제시할 때는 다음과 같이 나타난다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
Class>>method: parameter<br />
"This is a sample instance method."<br />
^self == parameter<br />
</syntaxhighlight><br />
<br />
<br />
코드 예제에서 메서드 이름에는 대부분의 Smalltalk 디버거에서 이용하는 규약을 사용할 것이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
ClassName>>methodName<br />
</syntaxhighlight><br />
<br />
<br />
즉 메서드 이름은 ">>" 우측에, 메서드가 위치하는 클래스는 ">>" 좌측에 표시한다. 따라서 클래스 메서드는 다음과 같은 형태를 가지게 된다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
Class class>>method<br />
"This is a sample class method"<br />
^self basicNew initialize<br />
</syntaxhighlight><br />
<br />
<br />
이 책은 '''디자인 패턴'''의 구체적인 페이지를 자주 참조한다. 각 패턴의 제목에는 원본 패턴이 소개된 페이지에 대한 참조를 함께 표시할 것이다. '''디자인 패턴'''의 페이지를 참조한 경우 "DP nn" 으로 표기되며, 여기서 nn 은 페이지 번호를 의미한다ㅡ예를 들어, "이것은 DP 84 에 나타난 C++ 코드와 일치한다." 와 같은 식이다.<br />
<br />
마지막으로, 본 서적에 사용된 다이어그램 대다수는 GoF 책에 사용된 규약과 동일하다. 예를 들어, OMT 스타일의 클래스와 객체 다이어그램(Rumbaugh et al., 1991)뿐만 아니라 Jacobson et al. (1992) 스타일의 상호작용 다이어그램도 포함하고 있다. 표기법에 대한 좋은 출처는 '''디자인 패턴''' 부록 B 이지만, 이 책에서도 그 구문과 의미를 분명히 하기 위해 필요한 곳에 주석으로 설명을 제공한다.<br />
<br />
이 책의 클래스 다이어그램은 전체 OMT 표기법의 하위집합을 사용하되, 중요한 메서드를 pseudocode<sup>의사(擬似)코드</sup> 또는 실제 Smalltalk 코드로 적어 표기법을 보강한다. 특정 클래스는 다음과 같이 클래스 다이어그램에 그려진다. 인스턴스 변수와 메서드 이름은 선택적으로, 명확한 의미전달에 필요할 때만 표기한다.<br />
<br />
[[image:dpsc_1.7_01.png]]<br />
<br />
OMT 스타일의 클래스 다이어그램은 다음과 같은 기본 형태를 가진다: <br />
<br />
[[image:dpsc_1.7_02.png]]<br />
<br />
클래스 다이어그램을 확실히 보여주는 예를 하나 더 들어보자:<br />
<br />
[[image:dpsc_1.7_03.png]]<br />
<br />
상호작용 다이어그램은 객체들 간의 동적인 런타임 상호작용을 설명한다:<br />
<br />
[[image:dpsc_1.7_04.png]]<br />
<br />
객체 다이어그램은 인스턴스 구조(의 일부)와 두 개 또는 그 이상의 인스턴스가 어떻게 관련되는지 (일반적으로 인스턴스 변수 참조에 의해) 보여준다. 객체 다이어그램은 메시지 전송과 같은 동적 정보는 표현하지 않는다. <br />
<br />
[[image:dpsc_1.7_05.png]]<br />
<br />
책에서는 필요할 때마다 각각 다른 다이어그램 표기법을 사용하고 있다. 예를 들어, State 패턴의 논의에서는 객체 또는 시스템이 위치할 수 있는 다양한 상태를 비롯해서, 상태 간 전이를 일으키는 메시지나 이벤트가 적힌 연결 링크를 보여주는 상태 전이 다이어그램을 이용해서 설명하는 것이 유용했다. 모든 표기법은 따로 설명이 필요 없을 만큼 명백하거나, 본문에서 설명된다.<br />
<br />
부디 '''Smalltalk Companion''' 을 즐기고, 그 결과로 학습할 수 있기를 바란다. 본문의 내용이 비록 생소하더라도 어느샌가 "아, 맞아, 이전에 이 패턴을 사용한 적이 있어,"라고 말하는 자신을 발견할 것이다. 숙련된 Smalltalk 이용자라 하더라도 학습할 내용은 많을 것이다. 당신이 새로운 내용을 학습했다면 이 책은 성공한 것이다. <br />
<br />
<br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:1.6&diff=5587DesignPatternSmalltalkCompanion:1.62018-07-24T08:52:22Z<p>Onionmixer: 검수 20180724</p>
<hr />
<div>===1.6 Smalltalk 코드 예제===<br />
<br />
코드를 제시하는 목적은 특정 패턴이 Smalltalk 에서 구현될 수 있는 한 가지 또는 그 이상의 방법을 설명하는 것이다. 책을 읽는 독자는, 예제로 소개한 코드를 보고 "저런 방법 대신 이렇게 이렇게 하면 구현할 수 있을 텐데" 라고 말할지도 모른다. 특정 설계 해법을 코딩할 때에는 항상 여러 가지 방법이 있으며 이를 모두 설명할 수는 없다. 그렇기 때문에 패턴을 실제 코드로 구현할 수 있는 하나 또는 그 이상의 방법을 제시하는 것이 목적이다. 그러고 나서 책을 놓고 책에서 배운 내용을 애플리케이션에 사용하는 것은 당신의 선택이며, 이런 과정으로 당신의 방식이나 미적 감각에 알맞은 구현 변형체를 찾을 수 있을 것이다.<br />
<br />
Smalltalk 예제에 대해서 말할 때 '''Smalltalk Companion''' 에서 코드를 어떻게 서식화했는지에 대한 문제도 생긴다. 코드 서식설정(formatting)은 까다로운 주제이다: "컴퓨터 과학자 두 명이 있다면 코드의 indent 방식에 대해 최소한 3가지 의견이 나올 것이다" (Wilson, 1997, p.122). 특정 코드 일부를 서식화하려 할 때에는 수많은 방법이 있지만, 서식화는 결국 "종교적" 문제와 같다: "코드 서식화만큼 실속은 없이 논쟁만 뜨거운 주제도 없다"(Beck, 1997, p.171). 가장 중요한 점은 다른 프로그래머가 읽고 이해하기 쉽도록 코드를 서식화해야 하며, 서식은 코드의 의미 전달을 도와야 한다는 점이다. 이를 언급하는 이유는 3명의 공동 저자들의 코드에서 포맷팅이 약간씩 차이가 있다는 걸 알게 될 것이기 때문이다. 그럼에도 불구하고 전체적인 예제는 Beck과 Skublics, 그리고 Klimas 와 Thomas (1996)의 포맷팅 지침서<sup>formatting guidelines</sup>에 제공된 일반적인 Smalltalk 양식을 따른다.<br />
<br />
<br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:1.5&diff=5586DesignPatternSmalltalkCompanion:1.52018-07-24T08:34:05Z<p>Onionmixer: </p>
<hr />
<div>===1.5 어떤 Smalltalk 방언일까?===<br />
<br />
현 시점에서 Smalltalk 개발 환경은 수없이 많다. 지난 수년 간 월드와이드웹을 통해 Smalltalk 관련 사이트를 살펴볼 때마다 새로운 개발 환경이 발표되는 듯이 보였다. 물론 "Big Three" 는 있으며, ParcPlace, Digitalk, IBM 이 바로 그것이다. ParcPlace 는 Smalltalk-80 를 본뜬 Objectworks 라는 이름의 환경을 내놓았지만 최근에는 VisualWorks<ref name="역자주1">현재는 Cincom 사의 제품이다. http://www.cincomsmalltalk.com/main/products/visualworks/</ref> 라고 불린다. 그들은 월드와이드웹이 가능한 Smalltalk의 서버 버전을 VisualWave 란 이름으로 판매하기도 했다. Digitalk 환경의 공식명칭은 Smalltalk/V 이지만 현재는 Visual Smalltalk 로 부른다. 오래된 Smalltalk/V 환경 대다수를 아직도 이용할 수 있다. 사실 Smalltalk/V Win16(16비트 윈도우 버전)는 Smalltalk Express 처럼 Objectshare 로부터 WindowsBuilder Pro/V UI builder 와 결합해서 현재 무료로 사용할 수 있다. IBM Smalltalk 는 비주얼 프로그래밍 애플리케이션 제작도구인 VisualAge 와 번들되어 있지만 프론트 엔드식 비주얼 프로그래밍 환경 없이도 Smalltalk 개발 환경을 구매하거나 설치할 수 있다.<br />
<br />
ParcPlace 와 Digitalk 가 합병되어 ParcPlace-Digitalk 가 되었으며, 두 가지 Smalltalk 방언을 합병할 계획이라고 발표했다. 그러나 이 책을 쓰는 현재, ParcPlace-Digitalk 측에서는 기존 버전을 계속해서 지원하겠지만 Visual Smalltalk 에 대한 제품개발은 더 이상 이루어지지 않을 것이라고 발표한 상태다. 아직도 Visual Smalltalk 로 설계하고 프로그래밍하는 사람들이 많기 때문에 이 소식은 유감이 아닐 수 없다. <br />
<br />
Dolphin Smalltalk, Smalltalk MT, Squeak 을 비롯해 Cincom ObjectStudio (이전 명칭은 Enfin), Gemstone (객체지향 데이터베이스 시스템과 번들된 Smalltalk), SmalltalkAgents, Smalltalk/X, GNU Smalltalk, 그리고 그 외에도 우리가 생각해내지 못한 새로운 이름을 포함한 소규모의 Smalltalk 버전이 상당히 많다. 주 플랫폼마다 최소한 하나의 Smalltalk 버전이 있으며, PC (DOS, OS/2, 16비트와 32비트 윈도우 운영체제), Macs, UNIX 기계, IBM AS/400s, 심지어 IBM 메인프레임도 해당된다.<br />
<br />
본 책은 가장 오래 사용된, 그에 따라 사용자 수가 가장 많을 법한 Smalltalk 방언에 초점을 둔다. 이는 저자들의 사용 경험이 가장 많은 방언들이기도 하다. 물론 수시로 다른 환경을 참조하긴 하지만 주로 VisualWorks 와 Visual Smalltalk, 그리고 이보다는 경험이 적지만 IBM Smalltalk 도 고려한다. 이러한 환경들의 기반 클래스 라이브러리로부터 현재 사용 중인 다수의 패턴 예제를 도출해냈다. Smalltalk 환경으로부터 코드를 인용하거나 코드 예제를 제시할 때는 다양한 Smalltalk 방언마다 코드와 관련된 차이점이 있기 때문에, 어떤 방언을 설명하는지를 확실히 표시했다.<br />
<br />
하지만 Smalltalk 는 Smalltalk 라는 점을 명심하자. 여러 방언들 사이에 차이는 있지만, 차이점보다는 닮은 점이 더 많으며, 여기서 제시된 코드 예제의 대다수는, 그대로 또는 약간의 조정 후 어떠한 Smalltalk 방언에 적용해도 잘 작동할 것이다.<br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:1.4&diff=5584DesignPatternSmalltalkCompanion:1.42018-07-24T08:26:27Z<p>Onionmixer: 단어수정</p>
<hr />
<div>===1.4 패턴에 대한 논의===<br />
<br />
GoF 와 여기서 논의하는 패턴들은 특정 수준의 추상화를 가지고 있다. 일반적으로 패턴은 객체지향 응용 프로그램 안에서 마이크로부터 매크로에 이르기까지 다양한 세분성과 추상화 수준으로 나타날 수 있다. 메서드의 변수에 어떤 이름을 붙일 것인지, 지연 초기화를 어떻게 구현할 것인지, 코드를 어떻게 서식화할 것인지 같은 저수준 문제들에 대한 규약 등은 마이크로패턴으로 간주한다(Kent Beck의 ''Smalltalk Best Practice Pattern [1997b]''<ref name="역자주1">http://trans.onionmixer.net/mediawiki/index.php?title=SmalltalkBestPracticePatterns</ref>에 나오는 패턴의 다수는 이 범주에 속한다). 그 반대로 VisualWorks 의 대화형 애플리케이션에 매우 중요한 아키텍처인 Model-View-Controller 프레임워크는 매크로패턴에 속한다. Buschmann et al.(1996)은 전체적으로 패턴은 ''이디엄<sup>idiom</sup>'' 부터 ''디자인 패턴'', 그리고 ''아키텍처 패턴'' 까지 다양하다고 주장한다. 여기서의 디자인 패턴들은 그 중간에 해당하는 영역에 중점을 둔다. 이 패턴들은 잘 설계되고 프로그램화된 애플리케이션들에 유용한 마이크로 아키텍처의 구조와 구현을 설명한다.<br />
<br />
'''디자인 패턴''' 에서와 마찬가지로 이 책은 생성 패턴, 구조 패턴, 행동 패턴의 세 장으로 구성된다. 생성 패턴에서는 객체의 생성 과정을 다룬다. 구조 패턴의 경우, 구조의 컴포넌트 기능성을 효율적으로 향상시키기 위해 객체를 좀 더 복잡한 구조로 구성하게 한다. 행동 패턴에서는 시스템의 기능적 행위, 객체가 목표를 달성하기 위한 의사소통, 협력, 책임의 분산 방법을 다룬다.<br />
<br />
다음으로는 패턴의 ''범위<sup>scope</sup>''에 따라 패턴 이름을 제목으로 분류해서, 각 패턴을 주로 클래스에 적용하는지, 아니면 인스턴스에 적용하는지를 상세하게 다룬다. Class 패턴은 상속을 통해 설정된 클래스 간의 정적 관계에 중점을 둔다. Object 패턴은 인스턴스들 간 동적 런타임 관계를 포함한다. 예를 들어, Template Method 패턴은 상속과 메서드 재정의를 통해 구현되므로 클래스 행동 패턴으로 분류된다. 상위클래스는 하나 또는 그 이상의 하위 메서드를 호출하는 상위 수준의 메서드를 정의하며, 하위클래스는 그 하위 메서드의 일부 또는 전부를 오버라이드할 수 있다; 따라서 패턴의 컴포넌트는 모두 정적이며, 클래스를 기반으로 한다. Strategy 패턴은 객체 행동 패턴으로 분류되는데, 그 이유는 Strategy 클래스를 정의하는 데 상속이 쓰임에도 불구하고 객체 간 상호작용이 패턴의 ''주요'' 특성이기 때문이다.<br />
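<br />
예로 든 Template Method 의 구조를 Smalltalk 로 간단히 스케치하면 다음과 같다. Report 와 SummaryReport 같은 클래스 이름은 설명을 위한 가정이다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"상위클래스가 하위 메서드들을 호출하는 상위 수준의 메서드(템플릿)를 정의한다"<br />
Report>>print<br />
	self printHeader.<br />
	self printBody.<br />
	self printFooter<br />
<br />
"하위 메서드는 하위클래스가 오버라이드한다"<br />
Report>>printHeader<br />
	^self subclassResponsibility<br />
<br />
SummaryReport>>printHeader<br />
	Transcript showCr: 'Summary Report'<br />
</syntaxhighlight><br />
<br />
패턴의 구성요소가 모두 클래스와 상속 관계로 고정되어 있다는 점에서, 이 패턴이 왜 클래스 패턴으로 분류되는지 드러난다.<br />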
<br />
전체적인 구성은 동일하지만 이 책의 내용과 GoF 패턴의 형식이 완전히 중복되는 것은 아니다. 그렇다고 완전히 새로운 패턴을 작성하는 것도 아니며, GoF 가 이미 사용한 패턴들이다. 대신 이러한 패턴들에 대한 Smalltalk 버전을 제시하는 것이 우리의 목적이다. 따라서 GoF 패턴에 사용된 모든 하위단락이 필요한 것은 아니다. 이 책의 각 패턴은 다음의 단락들로 구성되며, 패턴에 따라 일부 단락은 달라지거나 생략될 수 있다. <br />
<br />
* '''의도<sup>intent</sup>''' 단락에서는 가끔씩 다른 말로 변경한 부분이 있지만, [디자인 패턴] 본문을 그대로 발췌하였다. <br />
* 패턴의 '''구조<sup>Structure</sup>''' 다이어그램. 대부분 '''디자인 패턴''' 다이어그램과는 다르다. 더 명확한 설명을 위해 많은 구조 다이어그램을 수정했으며, Class 객체 등 Smalltalk 에 요구되는 객체를 포함하거나, C++ 의 구현과 패턴에 대한 Smalltalk 버전을 반영하고자 하였다. 구조적으로는 동일할지 몰라도 C++ 가 아닌 Smalltalk 구문과 의미론을 반영한다. <br />
* '''논의<sup>Discussion</sup>''' 부분에서는 패턴의 동기, Smalltalk 버전과 C++ 해석의 차이, 패턴의 장단점, 패턴을 적용하려 할때 고민해야할 일반적인 사항들과 같은 주제들을 다룬다. 이 부분은 크리스토퍼 알렉산더의 패턴처럼 일반적인 설명으로 구성된다 (Alexander et al., 1977).<br />
* '''협력<sup>Collaborations</sup>''' 및 '''활용성<sup>Applicability</sup>''' 단락은 선택적이며, 때로는 논의 부분에서 다뤄지기도 한다.<br />
* '''구현<sup>Implementation</sup>''', Smalltalk 또는 일반적으로 패턴을 구현할때에 대한 관련 주제들을 다룬다.<br />
* '''예제 코드<sup>Sample Code</sup>''' 에서는 패턴을 사용하는 Smalltalk 코드를 제공한다. 가끔 내용이나 흐름상 적절한 상황이라면, 구현과 예제 코드를 하나의 단락으로 끼워 넣기도 한다. <br />
* '''알려진 Smalltalk 사용예<sup>Known Smalltalk Uses</sup>'''. 본 저서는 '''Smalltalk Companion''' 이기 때문에 디자인 패턴을 사용하는 Smalltalk 애플리케이션 및 Smalltalk 라이브러리 클래스만 제시하는 것이 옳다고 판단하였다.<br />
* '''관련 패턴<sup>Related Patterns</sup>'''. 이 부분은 선택적이다. ''디자인 패턴''에 언급된 내용으로 충분한 경우라면, 이 단락은 없다.<br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:1.3&diff=5582DesignPatternSmalltalkCompanion:1.32018-07-24T07:48:29Z<p>Onionmixer: 역자주3 주석내용 변경 및 추가</p>
<hr />
<div>===1.3 C++ != Smalltalk (또는 Smalltalk ~= C++)===<br />
<br />
스몰토크와 C++는 단순히 서로 다른 프로그래밍 언어가 아니다; 언어에 대한 설계와 언어로 프로그래밍하는 데에는 기본적인 차이가 있다. 둘 중 하나로 작업하는데 익숙한 설계자들은 디자인 문제와 해법을 서로 다르게 바라볼 것이다.<br />
<br />
개발자가 작업하는 언어가 문제 해법에 대해 생각하는 방식에 영향을 미칠 것이라는 주장을 자세히 살펴보자. 자연언어를 다루는 심리 언어학 영역을 가장 높은 수준에서 살펴보면, 언어와 사고는 서로 밀접하게 연결되어 있으며 서로 영향을 미친다는 사실을 재빠르게 확인할 수 있다. Benjamin Whorf (1956)는 "언어는 개인이 세상을 바라보는 방식 또는 개인이 사고하는 방식에 확실히 영향을 미친다"고 가정했다. 이러한 워프의 가설(Whorfian hypothesis)이 프로그래밍 언어에도 통용된다고 주장하는 사람들도 있음은 쉽게 짐작할 수 있다: 서로 다른 언어에 대한 다른 구문과 제어구조는 그 언어로 문제를 해결하는 방식에 영향을 미치는 것이다 (예: Curtis, 1985, 특히 6장 참조).<br />
<br />
이와 반대로 생각의 방식과 세계관도 언어에 영향을 미치는 것으로 보인다. 사실 많은 심리학 연구에서는 이것이 사실이라는 증거를 제시해왔다 (예: Anderson, 1985 참조). 따라서 Goldberg와 Kay (1977)가 주장한 바와 같이 객체지향 언어들은, 설계자들이 실세계에 대한 그들의 인식을 설명하는 모델에 더 가깝게 구축하도록 하기 때문에 "진화되었다"는 말이 사실일지도 모르겠다. 여러 저자들이 제시하였듯이 객체지향 디자인은 사람들이 문제 영역을 자연스럽게 모형화하는데 더 나은 짝을 제공하므로 절차지향식 언어의 설계보다는 좀 더 "자연적이다" (Rosson & Alpert, 1990; Cox, 1984). <br />
<br />
앞의 두 가지 관점을 모두 뒷받침하는 Soloway, Bonar, Ehrlich (1983)는 사람들이 프로그래밍 해법의 설계와 관련해 자연스럽게 생각하는 방식과 더 일치하는 인식을 제공하는 프로그래밍 언어가 보다 사용하기 수월하다고 설명하였다. 하지만 그들은 프로그래밍 언어가 개인이 선호하는 설계 전략을 변화시키며 특정 언어의 구성에 경험이 많을수록 디자인 선호도가 바뀐다는 사실도 발견했다. 결론은, 누군가가 구현하는 언어ㅡC++, Smalltalk 등으로ㅡ는 시스템과 애플리케이션에 대한 사고 및 설계 방식에 영향을 미칠것이라는 사실이다. Smalltalk 전문가들과 C++ 개발자들은 서로 다른 언어로 프로그램을 구성하는데 그치는 것이 아니라, 서로 다른 언어로 말한다. 예를 들어 Smalltalk 프로그래머에게 클래스 객체는 클래스를 나타내는 진실된 스몰토크 객체인 반면, C++ 개발자들에게 클래스 객체란 사용자가 정의한 클래스의 인스턴스이다 (즉 내장된 C-언어 데이터 타입이 아니라는 의미다). 두 언어와 환경 간에 개념적으로 중복되는 부분이 상당히 많음에도 불구하고, 서로 상당히 다른 의견을 포함하고, 다른 문제를 표면화시키며, 최종적으로 프로그램 설계에 대한 서로 다른 사고를 이끌어 낸다.<br />
<br />
반대로 설계 시점에서 목표 언어를 고려할 필요도 있다: "설계 시점에서 언어를 고려하지 않을 경우, 문제를 미해결 상태로 남길 수 있으며… 그러한 설계로 인해 형편 없는 프로그램이 되기도 한다" (Smith, 1996a). 목표 언어를 선택할 때 발생할 수 있는 제약과 기회를 고려하지 않고 설계하는 것은 실수일 것이다. 스몰토크의 경우 언어 자체뿐 아니라 클래스 라이브러리와 내장된 프레임워크도 고려해야 한다. 예를 들어, VisualWorks 의 Model-View-Controller 프레임워크, Visual Smalltalk 의 Model-Pane 프레임워크, IBM Smalltalk 의 Motif-style 상호작용 프레임워크를 생각해보자ㅡ각 프레임워크는 실제로 대화형 애플리케이션의 설계를 시작하기도 전에 특정 디자인 결정을 내리도록 요구하기도 한다.<br />
<br />
이러한 생각을 명심하고 Smalltalk 과 C++ 이 어떻게 다른지ㅡ구체적으로 말해, 두 언어가 어떻게 설계에 영향을 미치는지ㅡ간단히 살펴보도록 하자. 이를 살펴보는 목적은, 두 언어에는 수많은 기본적 차이가 있기 때문에, 개발자들이 문제에 대해 생각하고, 해법을 설계하며, 디자인 패턴을 구현하는 방식이 서로 다를 수 밖에 없다는 우리의 주장을 뒷받침하기 위함이다(어느 한 언어가 뛰어나다는 주장을 하려는 것이 아니며, 두 언어의 비교를 통해 그 중 한 가지 언어를 선호하는 팬들을 언짢게 만들었다면 미리 사과드린다). 이 과정에서 구체적 특성의 유무에 따라 영향을 받을 일부 패턴을 언급하고자 한다.<br />
<br />
<br />
<br />
'''순수한 객체지향 대 혼합 객체지향'''<br />
<br />
Smalltalk 은 "순수한" 객체지향 언어이다. Smalltalk 에서 모든 계산은ㅡ가장 원시적 수준(과 일반 프로그래밍 활동과 관련이 없는 수준)을 제외한ㅡ객체로 전송하는 메시지의 결과로만 발생된다. 모든 것은 객체이며, 여기에는 숫자, 문자, 문자열과 같은 원시 데이터 타입<sup>Primitive Data Type</sup>을 포함한다. 반면 C++ 는 복합 언어로서, 절차지향의 C언어 기반에 객체지향의 특성들이 추가된 언어이다. 언어에서 제공되는 비객체지향적 특성을 이용할 수 있기 때문에, 설계자들과 프로그래머들에게서 서로 다른 사고방식을 이끌어 낸다. 예를 들어, C++ 에서는 어떠한 클래스에도 소속되지 않은 전역함수를 가질 수 있는 반면, Smalltalk 에서는 각 기능 부분의 책임을 누가 지는지를 반영해야만 한다. 복합 언어를 이용한 접근법은, 개인이 객체지향과 절차지향 패러다임의 이점을 모두 이용하는 프로그램을 사용하도록 하며, 기존의 C 코드로의 인터페이스가 훨씬 쉽다. 반면 복합 언어가 아닌 순수한 언어는 이해가 쉽다 (Rosson & Alpert, 1990).<br />
<br />
<br />
<br />
'''객체로서의 클래스'''<br />
<br />
모든 것을 객체로 간주하는 Smalltalk 에서 Class 는 first-class 런타임 객체를 의미한다. 이 사실은 메시지 송수신이 가능하며, 일반적으로 어떤 연산이라도 포함<sup>participating</sup>시킬 수 있음을 의미<ref name="역자주1">Smalltalk 에서 모든 클래스는 객체의 unique instance 라는 사실</ref>한다. C++ 에서는 이런 것이 불가능하다. Smalltalk 에서의 인스턴스 생성은 클래스 객체가 수행하는 업무 중의 하나지만, C++ 에는 언어 자체에 내장되어 있다. 따라서 Smalltalk 에서는 인스턴스의 생성과 동작 간의 차이가 덜 명확하다: 가장 기본 형태에서 인스턴스 생성은 특수 동작(specialization of behavior)에 불과하지만, C++ 에서는 엄밀히 별도 구분된다. 많은 패턴의 Smalltalk 버전에서는 클래스가 완전한 객체로서 패턴에 참여한다는 것을 알 수 있다(Abstract Factory, Singleton, Factory Method 패턴 등).<br />
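<br />
클래스가 일급 객체라는 사실은 workspace 에서 바로 확인할 수 있다. 아래는 대부분의 Smalltalk 방언에서 대체로 동작할 간단한 스케치이다(Transcript 출력 방식은 방언에 따라 조금 다를 수 있다):<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
| aClass |<br />
aClass := OrderedCollection.             "클래스 자체를 변수에 담는다"<br />
Transcript showCr: aClass name.          "클래스에 메시지를 보낸다"<br />
Transcript showCr: aClass superclass name.<br />
aClass new add: 42; yourself             "인스턴스 생성 역시 new 라는 메시지일 뿐이다"<br />
</syntaxhighlight><br />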
<br />
<br />
<br />
'''성숙하고 포괄적인 클래스 라이브러리'''<br />
<br />
주요 Smalltalk 환경의 이점 중 하나로서, 수 년 간 다듬고 디버깅해온 거대한 기본 클래스<sup>base class</sup> 세트가 있다. 라이브러리의 광범위한 사용 결과, 라이브러리에 포함된 저수준의 추상 데이터 타입마저 시간이 지나면서 향상된 덕분에 광범위한 성능을 가지게 되었다. 이렇게 광범위한 기능성은 특정 설계 고려사항은 물론이며, 심지어 일부 디자인 패턴 구현에 대한 고려사항조차 없애버린다. 예를 들어, Smalltalk 에서는 기본 Collection 클래스가 자체 반복<sup>iteration</sup> 메서드를 제공하기 때문에, 별도의 반복자<sup>Iterator</sup>를 설계하거나 구현할 필요가 없다. Composite 를 포함한 다른 패턴들도 광범위한 기본 클래스 라이브러리의 기능성을 재사용하기 때문에 혜택을 받을 수 있다. '모든 것은 객체다'라는 주제로 다시 돌아가보면, 숫자, 문자열, Collection과 같은 추상 데이터 타입은 언어 자체에 내장된 블랙박스 데이터 타입이 아니라, 기본 클래스 라이브러리에서 사용자가 수정이 가능한 클래스로서 구현된다. 즉 이러한 클래스 내에서 사용자만의 메서드를 정의함으로써 객체의 기능성을 향상시킬 수 있음을 의미한다. 예를 들어, 새로운 타입의 반복자가 필요하다면 새로운 Iterator 클래스를 정의하기보다는 Collection 에서 새로운 메서드를 작성할 수 있다.<br />
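<br />
예를 들어, 별도의 Iterator 객체 없이 Collection 자신의 반복 메서드를 쓰는 전형적인 모습은 다음과 같다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
| names total |<br />
names := #('Don' 'Jane').<br />
names do: [:each | Transcript showCr: each].             "내부 반복"<br />
total := #(10 20 30) inject: 0 into: [:sum :each | sum + each].<br />
Transcript showCr: total printString                     "60 이 출력된다"<br />
</syntaxhighlight><br />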
<br />
<br />
<br />
'''강한 타이핑 대 약한 타이핑'''<br />
<br />
C++ 는 강한 타이핑 유형의 언어이다; 모든 변수는 컴파일러에 선언되며, 특정 타입이나 특정 클래스에 속한다. Smalltalk 는 더 넓은 범위의 동적 (또는 지연) 바인딩의 형태를 이용한다. 변수는 특정 클래스의 것으로 선언되지 않으며, runtimeㅡ객체가 실제로 인스턴스화되어 변수에 의해 참조될 때ㅡ으로 시작되기 전까지는 특정 타입(클래스)와 관련되지 않는다. <br />
<br />
두 언어 모두, 어떤 단일 변수도 서로 다른 시점에서 서로 다른 클래스의 인스턴스를 가리킬 수 있다. 그러나 Smalltalk 에서는 전체 계층구조 안의 어떠한 클래스라도 그에 해당되는 인스턴스가 될 수 있지만, C++ 에서는 특정 기반 클래스 또는 그 클래스에서 비롯된 하위클래스의 인스턴스가 된다. Collection 내의 객체에서도 마찬가지다. C++ 의 경우 목록의 모든 객체는 특정 타입이나 클래스에(또는 그 하위클래스) 속해야 하는 반면, Smalltalk 의 경우에는 일반적으로 Collection 의 어떤 클래스라도 여러 종류의 인스턴스를 포함할 수 있다. 이러한 특징은 많은 패턴에서 중요하다. 예를 들어, Smalltalk 에서 Iterator 는 내부가 다형적이기 때문에 더 강력하며, element type 마다 서로 다른 유형의 Iterator 를 정의할 필요가 없다. 강한 타이핑과 약한 타이핑은 Composite, Command, Adapter 패턴에서도 비슷한 역할을 한다. 마지막 예로서, C++ 의 Adapter 는, 해당 Adaptee 의 유형을 선언해야 하기 때문에, 선언된 클래스 또는 그 하위클래스의 객체만 조정해야 하지만, Smalltalk 에서는 Adapter 가 Adaptee 로 보낸 메시지를 포함하는 인터페이스를 가진 어떤 클래스에도 Adaptee 는 소속될 수 있다.<br />
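<br />
예를 들어 Smalltalk 의 Collection 에는 선언된 원소 타입이 없기 때문에, 서로 다른 클래스의 인스턴스를 한 컬렉션에 담아 다형적으로 다룰 수 있다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
| mixed |<br />
mixed := OrderedCollection new.<br />
mixed add: 3; add: 'three'; add: $3.                     "정수, 문자열, 문자를 함께 담는다"<br />
mixed do: [:each | Transcript showCr: each printString]  "각 원소가 자신의 printString 을 다형적으로 수행한다"<br />
</syntaxhighlight><br />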
<br />
약한 타이핑 언어는 유연성이 크지만, 강한 타이핑 언어에도 그 나름의 이점이 있다. 예를 들어, 약한 타이핑은 타입 안전성이 떨어짐을 의미한다. 게다가 변수가 구체적인 클래스에 속하도록 선언하면, 프로그램에 대한 포괄적인 정적 분석과 컴파일 시점의 최적화가 가능해진다.<br />
<br />
<br />
<br />
'''블록(Block)'''<br />
<br />
블록<sup>block</sup>은 자신이 코드를 실행하라는 메시지를 수신하기 전까지는 실행되지 않는, Smalltalk 코드를 포함하는 객체이다. 즉, 블록에 포함된 코드는 일반적인 언어 명령문(statement)처럼 순차적으로 만나는 시점에 실행되는 것이 아니라, 블록에 명시적으로 value 메시지(또는 그 변형체)를 전송할 때까지 실행이 미뤄진다. 블록은 하나의 객체이기 때문에 코드로 생성시켜 다른 객체로 전달할 수 있으며, 따라서 블록은 코드의 일부를 다른 객체에 넘겨, 특정 상황이나 상태가 발생했을 때에만 그 객체가 평가<sup>evaluate</sup>하도록 해준다. 이러한 특징은 대부분의 경우 매우 유용하며 Iterator 와 같은 패턴에 사용되는 것을 확인하게 될 것이다. 블록은 또한 특정한 코드를 하나의 클래스의 각 인스턴스에 연결시키는 데도 효율적이다(행위를 클래스의 모든 인스턴스에 적용하는 메서드와 반대로). 이는 Adapter 패턴의 교체 가능한 어댑터<sup>pluggable adapter</sup>에 적용된다. 심지어 블록은 제어구조를 컴파일러에 내장된 조건문이나 루프로 제한하지 않고, 언어 안에서 우리 고유의 제어구조를 정의할 수 있게 해준다 (Ungar & Smith, 1987). C++ 를 포함해 대부분의 언어에는 코드의 일부를 메시지에 응답할 수 있는 first-class 객체로 만드는 구조가 없다. <br />
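<br />
블록이 일급 객체로서 어떻게 전달되고 평가되는지는 다음의 짧은 예로 확인할 수 있다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
| double result |<br />
double := [:x | x * 2].              "블록을 만들어 변수에 담는다; 아직 실행되지 않는다"<br />
result := double value: 21.          "value: 메시지를 받아야 비로소 실행된다 (결과는 42)"<br />
#(1 2 3) detect: [:each | each even] "블록을 다른 객체(컬렉션)에 넘겨 평가를 맡긴다"<br />
</syntaxhighlight><br />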
<br />
<br />
<br />
'''반영(refelection)과 메타수준의 성능'''<br />
<br />
스몰토크는 스몰토크 환경 자체에 대한 정보를 얻을 수 있는 코드를 작성하도록 허용한다. 클래스와 메서드는 기본 클래스 라이브러리에 존재하므로 프로그램이 클래스 계층구조 내의 클래스 간의 관계, 최근 실행한 프로세스의 메서드, 또는 특정 클래스의 인스턴스가 이해하는 메시지를 검색하도록 허용한다. 이러한 성능은 스몰토크 프로그래머의 툴킷에서 중요한 구성요소가 된다. 이는 개발환경 자체의 반영적 도구를ㅡ클래스와 메서드 브라우저, 디버거ㅡSmalltalk 에 내장시킬 수 있게 한다. 이런 코드들은 클래스 라이브러리에 포함되어 있기 때문에 추후 필요에 따라 도구를 개량 및 재정의하거나, 프로그램의 이해를 위해 새로운 도구로 통합시킬 수도 있다(예: Carroll et al., 1990). 다시 한 번 언급하지만 개인의 기호에 따라 다르다. Smalltalk 사용자들은 Smalltalk 환경으로 새로운 프로그래밍 도구를 구축하고 통합시키는 반영적 성능의 사용에 익숙하다. <br />
<br />
예를 들어, 메타 수준의 구조인 doesNotUnderstand: 를 이용해서, 사용자는 좀 더 향상된 기능성으로 객체를 "꾸미는" 목적, 또는 하나의 객체가 다른 기계 또는 데이터베이스의 다른 위치에 존재하는 객체에 대해 Proxy 의 역할을 하기 위한 목적으로 만들어진 모든 메시지를 가로챈 뒤에, 원하는 외부 객체로 이 메시지의 전송 여부와 시기를 결정할 수 있다. 또한 메시지 선택기<sup>selector</sup>를 기호 형식으로 저장해서, 내장된 perform: 메시지(와 그 변형체)를 이용해 객체로 그 메시지를 언제라도 호출할 수도 있다. 이는 함수 포인터를 사용해 함수를 호출하는 C++ 의 기능과 유사하지만, Smalltalk 버전에서는 메시지 서명의 기호<sup>symbolic</sup> 표현을 사용할 수 있다는 점이 다르다. Adapter, Observer, Command 같은 패턴의 Smalltalk 구현에서 perform: 이 사용되는 것을 보게 될 것이며, 선택기의 기호 버전을 사용할 수 있다는 사실은 Interpreter 패턴에서도 한몫을 한다.<br />
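<br />
본문에서 언급한 반영 기능 몇 가지를 workspace 에서 시험해 볼 수 있는 형태로 모아 보면 다음과 같다:<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
| selector |<br />
selector := #printString.                       "선택기를 기호로 저장한다"<br />
Transcript showCr: (42 perform: selector).      "저장해 둔 선택기로 메시지를 보낸다"<br />
Transcript showCr: 42 class name.               "객체에게 자신의 클래스를 묻는다"<br />
(42 respondsTo: #even)                          "특정 메시지를 이해하는지 검사한다"<br />
	ifTrue: [Transcript showCr: 'even 을 이해함']<br />
</syntaxhighlight><br />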
<br />
<br />
<br />
'''상속 의미론(Inheritance Semantics)'''<br />
<br />
2 개의 프로그래밍 언어에서 상속의 작용 방식에는 몇 가지 차이가 있는데, 그 중에 두 가지에 대해 알아보자. 첫째, C++ 는 다중 상속을 지원하지만, Smalltalk 는 클래스에 하나의 직접 상위클래스만 허용한다. 다중 상속은 몇 가지 문제에 대해 즉각적인 해법(예: [디자인 패턴]에서 Adapter 패턴에 대한 class adapter 버전)을 제공한다. Smalltalk 의 경우, 그러한 문제들에 대한 해법을 대안으로 마련해야 한다. 다중 상속은 프로그래밍 언어에 복잡성을 더한다; 프로그래머들은 이름의 충돌을 어떻게 처리하는지, 반복된 종속을 처리하기 위해 어떠한 규약이 준비되어 있는지를 알아야 한다(예: 동일한 상위클래스에서 상속된 두 클래스로부터의 상속). 다중 상속은 복잡성과 유용성 사이의 균형으로 인해 현대 Smalltalk 환경에서는 고의적으로 빠져있다. 다중 상속의 유용성이 그 사용으로 인한 추가적 복잡성보다 크지 않다고 생각했기 때문이다(특히 프로그램의 이해적 측면에서 볼 때). <br />
<br />
둘째, C++ 의 경우 함수의 동적 바인딩(상위클래스에서 선언되었으며, 하나 또는 그 이상의 하위클래스에 오버라이드된 경우)은 함수가 상위클래스에 가상으로 선언된 경우에만 작동된다. Rumbaugh et al. (1991)은 이런 특성이 확장성 및 점진적 재사용(특수화를 위해 다른 메서드를 오버라이딩하면서 상속을 통해 상위클래스의 행위 일부를 재사용)에 장애물이 될 수 있다고 지적했다. 처음으로 클래스<ref name="역자주2">일반적인 프로그래밍에서 부모클래스가 되는 클래스</ref>를 작성한 프로그래머가 메서드를 가상으로 선언하는 유일한 이유는, 아직 정의되지 않은 하위클래스에서 해당 연산을 오버라이드할 가능성이 있을 때뿐이며, 이런 방식의 선언은 클래스에 선언되는 각 함수별로 적용된다<ref name="역자주3">C++ 에서는 클래스 자체에는 virtual keyword 가 없으며(상속을 받으면 되는 거니), 각 메서드에서만 virtual 을 선언할 수 있기 때문. 왜 이런 명시적 선언이 필요한가 하면, 첫 번째로, 상속될 가능성이 있는 클래스의 소멸자를 virtual 로 선언하는 이유는, 객체 소멸의 순서를 컴파일러가 지킬 수 있도록 해주는 것. 두 번째는, VT(Virtual Table)을 생성할지를 컴파일러가 빨리 계산할 수 있기 때문이며, 그 외에도 virtual abc() = 0; 같은 순수 가상 함수를 만들어서, 상속받은 클래스에서 반드시 해당 함수를 구현하도록 강제할 수도 있고, 오버로드와 오버라이드를 명시적으로 구분할 수도 있기 때문.</ref> (Rumbaugh et al., 1991).<br />
<br />
<br />
<br />
'''대화형 개발 환경'''<br />
<br />
주요 Smalltalk 제품에서의 Smalltalk 개발은 항상 대화형 개발 환경에서 이루어진다. 여기에는 많은 의미가 함축되어 있는데, 쉬운 실험과 검사, 프로그램 이해를 위한 도구의 이용성, 재사용 가능한 클래스와 메서드의 발견 등이 있다. 설계와 관련해서, 개발 환경은 각 메서드를 저장할 때 해당 메서드에 대한 증분 컴파일을 제공하기 때문에 하나의 설계에 묶여 버리는 걸 피할 수 있으며, Smalltalk 프로그래머들은 클래스에서 메서드를 수정하거나 추가하기 위해 전체 클래스의 소스를 재컴파일할 필요가 없다. 따라서 새로운 기능이 어디에 위치해야 하는지에 관한 걱정을 덜 수 있다. 예를 들어, [디자인 패턴] 에서는 Visitor 패턴을 적용하는 이유 중 하나로, 새로운 동작의 추가로 인해 관련된 모든 클래스를 재컴파일해야 하는 경우를 소개하는데, 이런 경우 C++ 에서는 기능을 새로운 클래스에 위치시키고 기존 클래스의 인스턴스에 작용하도록 만드는 편이 낫다. 반면 Smalltalk 에서 제공되는 증분 컴파일은 이런 주제와 관련 없는 문제를 제거함으로써 코드를 기능적으로 또는 논리적으로 알맞은 곳에 위치시킬 수 있게 하므로, 책임 중심의 설계를 허용한다.<br />
<br />
<br />
<br />
'''추상 클래스, private 메서드'''<br />
<br />
C++ 는 일부 바람직한 특성을 언어 차원에서 명시적으로ㅡ그리고 강제적으로ㅡ구현하지만, Smalltalk 는 이를 지원하지 않는다. 예를 들어 C++ 와 달리 Smalltalk 는 메서드의 privacy 를 선언하고 강요하는 compile-time 메커니즘을 제공하지 않는다. 프로그래머들이 메서드를 private 으로 작성할 수는 있지만 (예: comment) 외부 객체가 실제로 그 메서드를 호출하지 못하도록 막는 내장된 메커니즘은 없다. 비슷한 경우로 Smalltalk 에는 추상적이어야 하는 클래스의 인스턴스화를 막는 메커니즘이 없다. Singleton 과 같은 패턴에서는 C++ 의 privacy 메커니즘을 런타임에 대체<sup>runtime substitute</sup>할 방법이 필요하기 때문에, Smalltalk 쪽 해법이 그만큼 더 복잡해지는 것을 살펴볼 것이다. <br />
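<br />
예를 들어 추상 클래스의 인스턴스화를 막고 싶다면, 언어 차원의 강제가 없는 대신 다음과 같은 런타임 관용구를 흔히 사용한다(Shape 는 설명을 위한 가상의 클래스다):<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
Shape class>>new<br />
	"추상 클래스 자신에 대한 new 만 런타임에 거부한다"<br />
	self == Shape ifTrue: [^self error: 'Shape is abstract'].<br />
	^super new<br />
</syntaxhighlight><br />
<br />
이 검사는 컴파일 시점이 아닌 런타임에 이루어진다는 점이 C++ 와의 차이를 잘 보여준다.<br />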
<br />
두 언어 간에는 이 외에도 많은 차이점이 있고, 각각의 장단점이 있다. 전체적으로 보면, C++ 는 효율성을 중요시하고 프로그래머가 오류를 피하도록 고안되었으며(Rumbaugh et al., 1991) Smalltalk 는 좀 더 유연하게 사용할 수 있도록 개발되었다. 다시 말하지만, 둘 중 하나가 어떤 면에서 "더 낫다"고 하고 싶은 건 아니다. 오히려 중요한 부분은 이 두 프로그래밍 언어가 '''서로 다르다는 것'''이며, Smalltalk 프로그래머와 C++ 프로그래머는 디자인 패턴을 다르게 표현할 가능성이 있다는 것이다. 따라서 이 책의 목표는 디자인 패턴의 23가지 패턴에 대한 Smalltalker 의 관점을 제공하는 것이다.<br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:1.3&diff=5581DesignPatternSmalltalkCompanion:1.32018-07-24T07:40:39Z<p>Onionmixer: C++ 의 클래스 선언과 virtual 키워드에 대한 주석 추가</p>
<hr />
<div>===1.3 C++ != Smalltalk (또는 Smalltalk ~= C++)===<br />
<br />
스몰토크와 C++는 단순히 서로 다른 프로그래밍 언어가 아니다; 언어에 대한 설계와 언어로 프로그래밍하는 데에는 기본적인 차이가 있다. 둘 중 하나로 작업하는데 익숙한 설계자들은 디자인 문제와 해법을 서로 다르게 바라볼 것이다.<br />
<br />
개발자가 작업하는 언어가 문제 해법에 대해 생각하는 방식에 영향을 미칠 것이라는 주장을 자세히 살펴보자. 자연언어를 다루는 심리 언어학 영역을 가장 높은 수준에서 살펴보면, 언어와 사고는 서로 밀접하게 연결되어 있으며 서로 영향을 미친다는 사실을 재빠르게 확인할 수 있다. Benjamin Whorf (1956)는 "언어는 개인이 세상을 바라보는 방식 또는 개인이 사고하는 방식에 확실히 영향을 미친다"고 가정했다. 이러한 워프의 가설(Whorfian hypothesis)이 프로그래밍 언어에도 통용된다고 주장하는 사람들도 있음은 쉽게 짐작할 수 있다: 서로 다른 언어에 대한 다른 구문과 제어구조는 그 언어로 문제를 해결하는 방식에 영향을 미치는 것이다 (예: Curtis, 1985, 특히 6장 참조).<br />
<br />
이와 반대로 생각의 방식과 세계관도 언어에 영향을 미치는 것으로 보인다. 사실 많은 심리학 연구에서는 이것이 사실이라는 증거를 제시해왔다 (예: Anderson, 1985 참조). 따라서 Goldberg와 Kay (1977)가 주장한 바와 같이 객체지향 언어들은, 설계자들이 실세계에 대한 그들의 인식을 설명하는 모델에 더 가깝게 구축하도록 하기 때문에 "진화되었다"는 말이 사실일지도 모르겠다. 여러 저자들이 제시하였듯이 객체지향 디자인은 사람들이 문제 영역을 자연스럽게 모형화하는데 더 나은 짝을 제공하므로 절차지향식 언어의 설계보다는 좀 더 "자연적이다" (Rosson & Alpert, 1990; Cox, 1984). <br />
<br />
앞의 두 가지 관점을 모두 뒷받침하는 Soloway, Bonar, Ehrlich (1983)는 사람들이 프로그래밍 해법의 설계와 관련해 자연스럽게 생각하는 방식과 더 일치하는 인식을 제공하는 프로그래밍 언어가 보다 사용하기 수월하다고 설명하였다. 하지만 그들은 프로그래밍 언어가 개인이 선호하는 설계 전략을 변화시키며 특정 언어의 구성에 경험이 많을수록 디자인 선호도가 바뀐다는 사실도 발견했다. 결론은, 누군가가 구현하는 언어ㅡC++, Smalltalk 등으로ㅡ는 시스템과 애플리케이션에 대한 사고 및 설계 방식에 영향을 미칠것이라는 사실이다. Smalltalk 전문가들과 C++ 개발자들은 서로 다른 언어로 프로그램을 구성하는데 그치는 것이 아니라, 서로 다른 언어로 말한다. 예를 들어 Smalltalk 프로그래머에게 클래스 객체는 클래스를 나타내는 진실된 스몰토크 객체인 반면, C++ 개발자들에게 클래스 객체란 사용자가 정의한 클래스의 인스턴스이다 (즉 내장된 C-언어 데이터 타입이 아니라는 의미다). 두 언어와 환경 간에 개념적으로 중복되는 부분이 상당히 많음에도 불구하고, 서로 상당히 다른 의견을 포함하고, 다른 문제를 표면화시키며, 최종적으로 프로그램 설계에 대한 서로 다른 사고를 이끌어 낸다.<br />
<br />
반대로 설계 시점에서 목표 언어를 고려할 필요도 있다: "설계 시점에서 언어를 고려하지 않을 경우, 문제를 미해결 상태로 남길 수 있으며… 그러한 설계로 인해 형편 없는 프로그램이 되기도 한다" (Smith, 1996a). 목표 언어를 선택할 때 발생할 수 있는 제약과 기회를 고려하지 않고 설계하는 것은 실수일 것이다. 스몰토크의 경우 언어 자체뿐 아니라 클래스 라이브러리와 내장된 프레임워크도 고려해야 한다. 예를 들어, VisualWorks 의 Model-View-Controller 프레임워크, Visual Smalltalk 의 Model-Pane 프레임워크, IBM Smalltalk 의 Motif-style 상호작용 프레임워크를 생각해보자ㅡ각 프레임워크는 실제로 대화형 애플리케이션의 설계를 시작하기도 전에 특정 디자인 결정을 필요로 하기도 한다.<br />
<br />
이러한 생각을 명심하고 Smalltalk 과 C++ 이 어떻게 다른지ㅡ구체적으로 말해, 두 언어가 어떻게 설계에 영향을 미치는지ㅡ간단히 살펴보도록 하자. 이를 살펴보는 목적은, 두 언어에는 수많은 기본적 차이가 있기 때문에, 개발자들이 문제에 대해 생각하고, 해법을 설계하며, 디자인 패턴을 구현하는 방식이 서로 다를 수 밖에 없다는 우리의 주장을 뒷받침하기 위함이다(어느 한 언어가 뛰어나다는 주장을 하려는 것이 아니며, 두 언어의 비교를 통해 그 중 한 가지 언어를 선호하는 팬들을 언짢게 만들었다면 미리 사과드린다). 이 과정에서 구체적 특성의 유무에 따라 영향을 받을 일부 패턴을 언급하고자 한다.<br />
<br />
<br />
<br />
'''순수한 객체지향 대 혼합 객체지향'''<br />
<br />
Smalltalk 은 "순수한" 객체지향 언어이다. Smalltalk 에서 모든 계산은ㅡ가장 원시적 수준(과 일반 프로그래밍 활동과 관련이 없는 수준)을 제외한ㅡ객체로 전송하는 메시지의 결과로만 발생된다. 모든 것은 객체이며, 여기에는 숫자, 문자, 문자열과 같은 원시 데이터 타입<sup>Primitive Data Type</sup>을 포함한다. 반면 C++ 는 복합 언어로서, 절차지향의 C언어 기반에 객체지향의 특성들이 추가된 언어이다. 언어에서 제공되는 비객체지향적 특성을 이용할 수 있기 때문에, 설계자들과 프로그래머들에게서 서로 다른 사고방식을 이끌어 낸다. 예를 들어, C++ 에서는 어떠한 클래스에도 소속되지 않은 전역함수를 가질 수 있는 반면, Smalltalk 에서는 각 기능 부분의 책임을 누가 지는지를 반영해야만 한다. 복합 언어를 이용한 접근법은, 개인이 객체지향과 절차지향 패러다임의 이점을 모두 이용하는 프로그램을 사용하도록 하며, 기존의 C 코드로의 인터페이스가 훨씬 쉽다. 반면 복합 언어가 아닌 순수한 언어는 이해가 쉽다 (Rosson & Alpert, 1990).<br />
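<br />
아래는 이를 보여주는 간단한 스케치다(어느 Smalltalk 방언에서나 workspace 에서 평가할 수 있는 표준적인 표현식들이다). 숫자, 문자, 문자열 같은 원시 데이터 타입도 객체이므로, 모든 계산이 메시지 전송으로 표현된다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"원시 데이터 타입도 객체다: 값에 직접 메시지를 보낸다"<br />
3 + 4.                "+ 는 3 에게 보내는 이항 메시지다"<br />
5 factorial.          "factorial 메시지의 결과는 120"<br />
3 class.              "숫자도 자신의 클래스(SmallInteger)를 알고 있다"<br />
$a asInteger.         "문자 객체에 보내는 메시지 (97)"<br />
'hello' asUppercase   "문자열 객체에 보내는 메시지"<br />
</syntaxhighlight><br />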
<br />
<br />
<br />
'''객체로서의 클래스'''<br />
<br />
모든 것을 객체로 간주하는 Smalltalk 에서 Class 는 first-class 런타임 객체를 의미한다. 이 사실은 메시지 송수신이 가능하며, 일반적으로 어떤 연산이라도 포함<sup>participating</sup>시킬 수 있음을 의미<ref name="역자주1">Smalltalk 에서 모든 클래스는 객체의 unique instance 라는 사실</ref>한다. C++ 에서는 이런 것이 불가능하다. Smalltalk 에서의 인스턴스 생성은 클래스 객체가 수행하는 업무 중의 하나지만, C++ 에는 언어 자체에 내장되어 있다. 따라서 Smalltalk 에서는 인스턴스의 생성과 동작 간의 차이가 덜 명확하다: 가장 기본 형태에서 인스턴스 생성은 특수 동작(specialization of behavior)에 불과하지만, C++ 에서는 엄밀히 별도 구분된다. 대부분의 패턴(Abstract Factory, Singleton, Factory Method 패턴 등)에서 클래스가 완전한 객체로서 패턴에 참여하고 있다는 것을 알 수 있을 것이다.<br />
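<br />
클래스가 런타임 객체라는 점은 다음과 같은 간단한 스케치로 확인할 수 있다. 인스턴스 생성은 클래스 객체에 보내는 new 메시지일 뿐이며, 클래스 자체를 변수에 담아 전달할 수도 있다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
| factory |<br />
OrderedCollection new.           "인스턴스 생성은 클래스 객체에 보내는 new 메시지일 뿐이다"<br />
OrderedCollection superclass.    "클래스도 보통 객체처럼 메시지에 응답한다"<br />
factory := OrderedCollection.    "클래스를 변수에 담아 다른 객체에 전달할 수 있다"<br />
factory new                      "어떤 클래스가 담겨 있든 new 메시지로 인스턴스를 만든다"<br />
</syntaxhighlight><br />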
<br />
<br />
<br />
'''성숙하고 포괄적인 클래스 라이브러리'''<br />
<br />
주요 Smalltalk 환경의 이점 중 하나로서, 수 년 간 다듬고 디버깅해온 거대한 기본 클래스<sup>base class</sup> 세트가 있다. 라이브러리의 광범위한 사용 결과, 라이브러리에 포함된 저수준의 추상 데이터 타입마저 시간이 지나면서 향상된 덕분에 광범위한 기능을 가지게 되었다. 이렇게 광범위한 기능성은 특정 설계 고려사항은 물론이며, 심지어 일부 디자인 패턴 구현에 대한 고려사항조차 없애버린다. 예를 들어, Smalltalk 에서는 기본 Collection 클래스가 자체 반복<sup>iteration</sup> 메서드를 제공하기 때문에, 내부 반복자<sup>Iterator</sup>를 설계하거나 구현할 필요가 없다. Composite 를 포함한 다른 패턴들도 광범위한 기본 클래스 라이브러리의 기능성을 재사용하기 때문에 혜택을 받을 수 있다. '모든 것은 객체다'라는 주제로 다시 돌아가보면, 숫자, 문자열, Collection과 같은 추상 데이터 타입은 언어 자체에 내장된 블랙박스 데이터 타입이 아니라, 기본 클래스 라이브러리에서 사용자가 수정이 가능한 클래스로서 구현된다. 즉 이러한 클래스 내에서 사용자만의 메서드를 정의함으로써 객체의 기능성을 향상시킬 수 있음을 의미한다. 예를 들어, 새로운 타입의 반복자가 필요하다면 새로운 Iterator 클래스를 정의하기보다는 Collection 에서 새로운 메서드를 작성할 수 있다.<br />
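<br />
예를 들어 기본 Collection 프로토콜의 내부 반복 메서드들은 다음과 같이 쓰인다. 표준 프로토콜이므로 별도의 Iterator 클래스가 필요 없다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
"Collection 이 제공하는 내부 반복 메서드들"<br />
#(1 2 3 4 5) do: [:each | Transcript show: each printString; cr].<br />
#(1 2 3 4 5) collect: [:each | each * each].   "각 원소를 변환한 새 컬렉션을 돌려준다"<br />
#(1 2 3 4 5) select: [:each | each even]       "조건에 맞는 원소만 골라낸다"<br />
</syntaxhighlight><br />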
<br />
<br />
<br />
'''강한 타이핑 대 약한 타이핑'''<br />
<br />
C++ 는 강한 타이핑 유형의 언어이다; 모든 변수는 컴파일러에 선언되며, 특정 타입이나 특정 클래스에 속한다. Smalltalk 는 더 넓은 범위의 동적 (또는 지연) 바인딩의 형태를 이용한다. 변수는 특정 클래스의 것으로 선언되지 않으며, 런타임에ㅡ객체가 실제로 인스턴스화되어 변수에 의해 참조될 때ㅡ이르러서야 특정 타입(클래스)과 연관된다. <br />
<br />
두 언어 모두, 어떤 단일 변수도 서로 다른 시점에서 서로 다른 클래스의 인스턴스를 가리킬 수 있다. 그러나 Smalltalk 에서는 전체 계층구조 안의 어떠한 클래스라도 그에 해당되는 인스턴스가 될 수 있지만, C++ 에서는 특정 기반 클래스 또는 그 클래스에서 비롯된 하위클래스의 인스턴스가 된다. Collection 내의 객체에서도 마찬가지다. C++ 의 경우 목록의 모든 객체는 특정 타입이나 클래스에(또는 그 하위클래스) 속해야 하는 반면, Smalltalk 의 경우에는 일반적으로 Collection 의 어떤 클래스라도 여러 종류의 인스턴스를 포함할 수 있다. 이러한 특징은 많은 패턴에서 중요하다. 예를 들어, Smalltalk 에서 Iterator 는 내부가 다형적이기 때문에 더 강력하며, element type 마다 서로 다른 유형의 Iterator 를 정의할 필요가 없다. 강한 타이핑과 약한 타이핑은 Composite, Command, Adapter 패턴에서도 비슷한 역할을 한다. 마지막 예로서, C++ 의 Adapter 는, 해당 Adaptee 의 유형을 선언해야 하기 때문에, 선언된 클래스 또는 그 하위클래스의 객체만 조정해야 하지만, Smalltalk 에서는 Adapter 가 Adaptee 로 보낸 메시지를 포함하는 인터페이스를 가진 어떤 클래스에도 Adaptee 는 소속될 수 있다.<br />
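<br />
다음 스케치는 하나의 컬렉션에 서로 다른 클래스의 인스턴스를 섞어 담는 모습이다. 각 원소는 런타임에 자신의 클래스에 따라 메시지에 응답한다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
| mixed |<br />
mixed := OrderedCollection new.<br />
mixed add: 42; add: 'forty-two'; add: $x.      "서로 다른 클래스의 인스턴스를 한 컬렉션에 담는다"<br />
mixed do: [:each | Transcript show: each class name; cr]<br />
</syntaxhighlight><br />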
<br />
약한 타이핑 언어는 유연성이 크지만, 강한 타이핑 언어에도 그에 상응하는 이점이 있다. 예를 들어, 약한 타이핑이란 타입 안전성이 떨어짐을 의미한다. 게다가 변수가 구체적인 클래스에 속하도록 선언하는 경우, 프로그램의 포괄적인 정적 분석과 컴파일 시점의 최적화가 가능해진다.<br />
<br />
<br />
<br />
'''블록(Block)'''<br />
<br />
블록<sup>block</sup>은 자신이 코드를 실행하라는 메시지를 수신하기 전까지는 실행되지 않는, Smalltalk 코드를 포함하는 객체이다. 즉, 블록에 담긴 코드는 일반적인 언어 명령문(statement)처럼 순차적으로 만나는(sequential encounter) 시점에 실행되는 것이 아니라, 블록에게 명시적으로 value 메시지(또는 그 변형체)를 전송할 때까지 대기한다. 블록은 하나의 객체이기 때문에 코드로 생성시켜 다른 객체로 전달할 수 있으며, 따라서 블록은 코드의 일부를 다른 객체에게 넘겨 그 객체가 평가<sup>evaluate</sup>하도록ㅡ그것도 구체적인 상황이나 상태가 발생할 경우로 제한해서ㅡ해준다. 이러한 특징은 대부분의 경우 매우 유용하며 Iterator 와 같은 패턴에 사용되는 것을 확인하게 될 것이다. 블록은 또한 특이한 코드를 하나의 클래스의 각 인스턴스에 연결시키는 데도 효율적이다(행위를 클래스의 모든 인스턴스에 적용하는 메서드와 반대로). 이는 Adapter 패턴의 대체 가능한 어댑터<sup>pluggable adapter</sup>에 적용된다. 심지어 블록 구조는, 컴파일러가 정해 놓은 조건문이나 루프 구조로 제어구조를 제한하기보다는, 언어 안에서 우리 고유의 제어 구조를 정의할 수 있게 해준다 (Ungar & Smith, 1987). C++를 포함해 대부분 언어에는 코드의 일부를 메시지에 응답할 수 있는 first-class 객체로 만드는 구조가 없다. <br />
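<br />
블록의 지연 실행은 다음과 같이 확인할 수 있다. value 메시지(또는 그 변형체)를 보낼 때까지 블록 안의 코드는 실행되지 않는다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
| adder |<br />
adder := [:a :b | a + b].        "코드를 담은 객체를 만들 뿐, 아직 실행되지 않는다"<br />
adder value: 3 value: 4.         "value:value: 메시지를 받을 때 비로소 실행되어 7 을 돌려준다"<br />
3 timesRepeat: [Transcript show: '*'].        "블록을 다른 객체(3)에게 넘겨 제어구조를 구성한다"<br />
1 to: 3 do: [:i | Transcript show: i printString]<br />
</syntaxhighlight><br />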
<br />
<br />
<br />
'''반영(reflection)과 메타수준의 기능'''<br />
<br />
스몰토크는 스몰토크 환경 자체에 대한 정보를 얻을 수 있는 코드를 작성하도록 허용한다. 클래스와 메서드는 기본 클래스 라이브러리에 존재하므로 프로그램이 클래스 계층구조 내의 클래스 간의 관계, 최근 실행한 프로세스의 메서드, 또는 특정 클래스의 인스턴스가 이해하는 메시지를 검색하도록 허용한다. 이러한 기능은 스몰토크 프로그래머의 툴킷에서 중요한 구성요소가 된다. 이는 개발환경 자체의 반영적 도구를ㅡ클래스와 메서드 브라우저, 디버거ㅡSmalltalk 에 내장시킬 수 있게 한다. 이런 코드들은 클래스 라이브러리에 포함되어 있기 때문에 추후 필요에 따라 도구를 개량 및 재정의하거나, 프로그램의 이해를 위해 새로운 도구로 통합시킬 수도 있다(예: Carroll et al., 1990). 다시 한 번 언급하지만 개인의 기호에 따라 다르다. Smalltalk 사용자들은 Smalltalk 환경으로 새로운 프로그래밍 도구를 구축하고 통합시키는 반영 기능의 사용에 익숙하다. <br />
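<br />
예를 들어 다음과 같은 표준 반영 메시지들로 환경 자체를 조회할 수 있다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
OrderedCollection superclass.    "클래스 계층구조 내의 관계를 조회한다"<br />
OrderedCollection selectors.     "이 클래스가 구현하는 메시지 선택기들을 돌려준다"<br />
3 respondsTo: #factorial         "인스턴스가 이 메시지를 이해하는지 질의한다 (true)"<br />
</syntaxhighlight><br />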
<br />
예를 들어, 메타 수준의 구조인 doesNotUnderstand: 를 이용해서, 사용자는 좀 더 향상된 기능성으로 객체를 "꾸미는" 목적, 또는 하나의 객체가 다른 기계 또는 데이터베이스의 다른 위치에 존재하는 객체에 대해 Proxy 의 역할을 하기 위한 목적으로 만들어진 모든 메시지를 가로챈 뒤에, 원하는 외부 객체로 이 메시지의 전송 여부와 시기를 결정할 수 있다. 또한 메시지 선택기<sup>selector</sup>를 기호 형식으로 저장해서, 내장된 perform: 메시지(와 그 변형체)를 이용해 객체로 그 메시지를 언제라도 호출할 수도 있다. 이는 함수 포인터를 사용해서 함수를 호출하는 C++ 의 기능과 유사하다. 다른 점이라면 Smalltalk 버전에서는 메시지 서명의 기호<sup>symbolic</sup> 표현을 사용할 수 있다는 것이다. Adapter, Observer, Command 같은 패턴의 Smalltalk 구현에서 perform: 이 사용되는 것을 보게 될 것이며, 선택기의 기호 버전을 사용할 수 있다는 사실은 Interpreter 패턴에서도 역할을 한다.<br />
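<br />
perform: 의 용법을 스케치하면 다음과 같다.<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
| selector |<br />
selector := #asUppercase.        "선택기를 기호로 저장해 두었다가"<br />
'hello' perform: selector        "런타임에 그 메시지를 호출한다"<br />
</syntaxhighlight><br />
<br />
그리고 doesNotUnderstand: 를 이용한 Proxy 의 뼈대는 아래처럼 그려볼 수 있다(MessageForwarder 와 realSubject 는 설명을 위해 가정한 이름이다).<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
MessageForwarder>>doesNotUnderstand: aMessage<br />
	"이해하지 못한 모든 메시지를 가로채 실제 객체에 전달하는 가정적인 스케치"<br />
	^realSubject perform: aMessage selector withArguments: aMessage arguments<br />
</syntaxhighlight><br />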
<br />
<br />
<br />
'''상속 의미론(Inheritance Semantics)'''<br />
<br />
2 개의 프로그래밍 언어에서 상속의 작용 방식에는 몇 가지 차이가 있는데, 그 중에 두 가지에 대해 알아보자. 첫째, C++ 는 다중 상속을 지원하지만, Smalltalk 는 클래스에 하나의 직접 상위클래스만 허용한다. 다중 상속은 몇 가지 문제에 대해 즉각적인 해법(예: [디자인 패턴]에서 Adapter 패턴에 대한 class adapter 버전)을 제공한다. Smalltalk 의 경우, 그러한 문제들에 대한 해법을 대안으로 마련해야 한다. 다중 상속은 프로그래밍 언어에 복잡성을 더한다; 프로그래머들은 이름의 충돌을 어떻게 처리하는지, 반복 상속<sup>repeated inheritance</sup>을 처리하기 위해 어떠한 규약이 준비되어 있는지를 알아야 한다(예: 동일한 상위클래스에서 상속된 두 클래스로부터의 상속). 다중 상속은 복잡성과 유용성 사이의 균형으로 인해 현대 Smalltalk 환경에서는 의도적으로 빠져 있다. 다중 상속의 유용성이 그 사용으로 인한 추가적 복잡성보다 크지 않다고 생각했기 때문이다(특히 프로그램의 이해적 측면에서 볼 때). <br />
<br />
둘째, C++ 의 경우 함수의 동적 바인딩(상위클래스에서 선언되었으며, 하나 또는 그 이상의 하위클래스에 오버라이드된 경우)은 함수가 상위클래스에 가상으로 선언된 경우에만 작동된다. Rumbaugh et al. (1991)은 이런 특성이 확장성 및 점진적 재사용(특수화를 위해 다른 메서드를 오버라이딩하면서 상속을 통해 상위클래스의 행위 일부를 재사용)에 장애물이 될 수 있다고 지적했다. 처음으로 클래스<ref name="역자주2">일반적인 프로그래밍에서 부모클래스가 되는 클래스</ref>를 작성한 프로그래머가 메서드를 가상으로 선언하는 유일한 이유는, 아직 정의되지 않은 하위클래스에서 해당 연산을 오버라이드할 가능성이 있을 때뿐이며, 이런 방식의 선언은 클래스에 선언되는 각 함수별로 적용된다<ref name="역자주3">C++ 에서는 클래스 자체에는 virtual keyword는 없으며, 각 메서드 별로 virtual 을 선언할 수 있기 때문.</ref> (Rumbaugh et al., 1991).<br />
<br />
<br />
<br />
'''대화형 개발 환경'''<br />
<br />
주요 Smalltalk 제품에서의 Smalltalk 개발은 항상 대화형 개발 환경에서 이루어진다. 여기에는 많은 의미가 함축되어 있는데, 쉬운 실험, 검사, 프로그램 이해를 위한 도구의 이용성, 재사용 가능한 클래스와 메서드의 발견 등이 있다. 설계와 관련해서, 개발 환경은 각 메서드를 저장할 때 해당 메서드만 증분 컴파일하기 때문에 하나의 설계에 묶여 버리는 일을 피할 수 있으며, Smalltalk 프로그래머들은 클래스에서 메서드를 수정하거나 추가하기 위해 전체 클래스의 소스를 재컴파일하거나 갖출 필요가 없다. 따라서 대체 어디에 새로운 기능이 위치해야 하는지에 관한 걱정을 덜 수 있다. 예를 들어 [디자인 패턴] 에서는, Visitor 패턴을 적용하는 이유 중에 하나로서, 새로운 동작의 추가로 인해 관련된 모든 클래스를 재컴파일해야 하는 경우를 소개하는데, 이런 경우 C++ 에서는 기능을 새로운 클래스에 위치시키고 기존 클래스의 인스턴스에 작용하도록 만드는 편이 낫다. Smalltalk 에서 제공되는 증분 컴파일은 이러한 주제와 관련 없는 문제를 제거함으로써 코드를 기능적으로나 논리적으로 적절한 곳에 위치시킬 수 있게 해주므로, 책임 중심의 설계를 허용한다.<br />
<br />
<br />
<br />
'''추상 클래스, private 메서드'''<br />
<br />
C++ 는 Smalltalk 가 지원하지 않는 바람직한 설계 결정을 명시적으로ㅡ그리고 강제적으로ㅡ구현하는 언어 기반 특성들을 포함한다. 예를 들어 C++ 와 달리 Smalltalk 는 메서드의 privacy 를 선언하고 강요하는 compile-time 메커니즘을 제공하지 않는다. 프로그래머들이 메서드를 private 으로 표시할 수는 있지만 (예: comment) 외부 객체가 실제로 그 메서드를 호출하지 못하도록 막는 내장된 메커니즘은 없다. 비슷한 경우로 Smalltalk 에는 추상적이어야 하는 클래스의 인스턴스화를 막는 메커니즘이 없다. Singleton 과 같은 패턴에서는 C++ 의 privacy 메커니즘에 대한 런타임 대체<sup>runtime substitute</sup>가 필요하기 때문에 Smalltalk 쪽의 해법이 더 복잡해지는 것을 살펴볼 것이다. <br />
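<br />
예를 들어 추상 클래스의 인스턴스화는 다음과 같은 런타임 검사 관례로만 막을 수 있다(AbstractShape 는 설명을 위해 가정한 클래스 이름이며, 이는 컴파일 시점의 보장이 아니라 런타임 검사일 뿐이다).<br />
<br />
<syntaxhighlight lang="smalltalk"><br />
AbstractShape class>>new<br />
	"추상 클래스에 대한 관례적인 런타임 인스턴스화 방어"<br />
	self == AbstractShape<br />
		ifTrue: [^self error: 'AbstractShape is abstract'].<br />
	^super new<br />
</syntaxhighlight><br />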
<br />
두 언어 간에는 이 외에도 많은 차이점이 있고, 각각의 장단점이 있다. 전체적으로 보면, C++ 는 효율성을 중요시하고 프로그래머가 오류를 피하도록 고안되었으며(Rumbaugh et al., 1991) Smalltalk 는 좀 더 유연하게 사용할 수 있도록 개발되었다. 다시 말하지만, 둘 중 하나가 어떤 면에서 "더 낫다"고 하고 싶은 것은 아니다. 오히려 중요한 부분은 이 2개의 프로그래밍 언어가 '''서로 다르다는 것'''이며, Smalltalk 프로그래머와 C++ 프로그래머는 디자인 패턴을 다르게 설명할 가능성이 있다는 것이다. 따라서 이 책의 목표는 [디자인 패턴]의 23가지 패턴에 대한 Smalltalker 의 관점을 제공하는 것이다.<br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion&diff=5579DesignPatternSmalltalkCompanion2018-07-24T07:34:48Z<p>Onionmixer: </p>
<hr />
<div>;The Design Patterns Smalltalk Companion<br />
<br />
원본<br><br />
http://www.amazon.com/The-Design-Patterns-Smalltalk-Companion/dp/0201184621/ref=sr_1_1?ie=UTF8&qid=1335335942&sr=8-1<br />
<br />
번역진행<br><br />
'''이화영 (Hwa Young Lee)'''<br />
<br />
검수진행<br><br />
'''smalltalk korea 커뮤니티'''<br />
<br />
----<br />
===The Design Patterns Smalltalk Companion===<br />
<br />
<br />
'''번역관련 내용'''<br />
<br />
* [[:DesignPatternSmalltalkCompanion:transdic|번역관련 기타내용]]<br />
<br />
====서문====<br />
<br />
* [[:DesignPatternSmalltalkCompanion:Head01|머리말-01]]<br />
* [[:DesignPatternSmalltalkCompanion:1.1|1.1 왜 디자인패턴인가?]]<br />
* [[:DesignPatternSmalltalkCompanion:1.2|1.2 왜 Smalltalk Companion인가?]]<br />
* [[:DesignPatternSmalltalkCompanion:1.3|1.3 C++!=Smalltalk (또는 Smalltalk ~=C++)]]<br />
* [[:DesignPatternSmalltalkCompanion:1.4|1.4 패턴에 대한 논의]]<br />
* [[:DesignPatternSmalltalkCompanion:1.5|1.5 어떤 스몰토크 방언일까?]]<br />
* [[:DesignPatternSmalltalkCompanion:1.6|1.6 스몰토크 코드 예제]]<br />
* [[:DesignPatternSmalltalkCompanion:1.7|1.7 본 책에 사용된 규약]]<br />
<br />
====아하!====<br />
<br />
* [[:DesignPatternSmalltalkCompanion:Head02|머리말-02]]<br />
* [[:DesignPatternSmalltalkCompanion:2.1|2.1 장면 1: 혼란에 빠지다]]<br />
* [[:DesignPatternSmalltalkCompanion:2.2|2.2 장면 2: 원칙은 깨어져선 안 된다.]]<br />
* [[:DesignPatternSmalltalkCompanion:2.3|2.3 장면 3: 데이터베이스 스키마와 Dream]]<br />
<br />
====생성 패턴====<br />
<br />
* [[:DesignPatternSmalltalkCompanion:Head03|머리말-03]]<br />
* [[:DesignPatternSmalltalkCompanion:AbstractFactory|Abstract Factory]]<br />
* [[:DesignPatternSmalltalkCompanion:Builder|Builder]]<br />
* [[:DesignPatternSmalltalkCompanion:FactoryMethod|Factory Method]]<br />
* [[:DesignPatternSmalltalkCompanion:Prototype|Prototype]]<br />
* [[:DesignPatternSmalltalkCompanion:Singleton|Singleton]]<br />
<br />
====구조 패턴====<br />
<br />
* [[:DesignPatternSmalltalkCompanion:Head04|머리말-04]]<br />
* [[:DesignPatternSmalltalkCompanion:Adapter|Adapter]]<br />
* [[:DesignPatternSmalltalkCompanion:Bridge|Bridge]]<br />
* [[:DesignPatternSmalltalkCompanion:Composite|Composite]]<br />
* [[:DesignPatternSmalltalkCompanion:Decorator|Decorator]]<br />
* [[:DesignPatternSmalltalkCompanion:Facade|Facade]]<br />
* [[:DesignPatternSmalltalkCompanion:Flyweight|Flyweight]]<br />
* [[:DesignPatternSmalltalkCompanion:Proxy|Proxy]]<br />
<br />
====행위 패턴====<br />
<br />
* [[:DesignPatternSmalltalkCompanion:Head05|머리말-05]]<br />
* [[:DesignPatternSmalltalkCompanion:ChainsofResponsibility|Chains of Responsibility]]<br />
* [[:DesignPatternSmalltalkCompanion:Command|Command]]<br />
* [[:DesignPatternSmalltalkCompanion:Interpreter|Interpreter]]<br />
* [[:DesignPatternSmalltalkCompanion:Iterator|Iterator]]<br />
* [[:DesignPatternSmalltalkCompanion:Mediator|Mediator]]<br />
* [[:DesignPatternSmalltalkCompanion:Memento|Memento]]<br />
* [[:DesignPatternSmalltalkCompanion:Observer|Observer]]<br />
* [[:DesignPatternSmalltalkCompanion:State|State]]<br />
* [[:DesignPatternSmalltalkCompanion:Strategy|Strategy]]<br />
* [[:DesignPatternSmalltalkCompanion:TemplateMethod|Template Method]]<br />
* [[:DesignPatternSmalltalkCompanion:Visitor|Visitor]]<br />
<br />
====결론====<br />
* [[:DesignPatternSmalltalkCompanion:Head06|머리말-06]]<br />
* [[:DesignPatternSmalltalkCompanion:PointersTothePatternsCommunity|패턴사용자를 위한 조언]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:1.3&diff=5578DesignPatternSmalltalkCompanion:1.32018-07-24T07:34:15Z<p>Onionmixer: 검수 20180724</p>
<hr />
<div>===1.3 C++ != Smalltalk (또는 Smalltalk ~= C++)===<br />
<br />
스몰토크와 C++는 단순히 서로 다른 프로그래밍 언어가 아니다; 언어에 대한 설계와 언어로 프로그래밍하는 데에는 기본적인 차이가 있다. 둘 중 하나로 작업하는데 익숙한 설계자들은 디자인 문제와 해법을 서로 다르게 바라볼 것이다.<br />
<br />
개발자가 작업하는 언어가 문제 해법에 대해 생각하는 방식에 영향을 미칠 것이라는 주장을 자세히 살펴보자. 자연언어를 다루는 심리 언어학 영역을 가장 높은 수준에서 살펴보면, 언어와 사고는 서로 밀접하게 연결되어 있으며 서로 영향을 미친다는 사실을 재빠르게 확인할 수 있다. Benjamin Whorf (1956)는 "언어는 개인이 세상을 바라보는 방식 또는 개인이 사고하는 방식에 확실히 영향을 미친다"고 가정했다. 이러한 워프의 가설(Whorfian hypothesis)이 프로그래밍 언어에도 통용된다고 주장하는 사람들도 있음은 쉽게 짐작할 수 있다: 서로 다른 언어에 대한 다른 구문과 제어구조는 그 언어로 문제를 해결하는 방식에 영향을 미치는 것이다 (예: Curtis, 1985, 특히 6장 참조).<br />
<br />
이와 반대로 생각의 방식과 세계관도 언어에 영향을 미치는 것으로 보인다. 사실 많은 심리학 연구에서는 이것이 사실이라는 증거를 제시해왔다 (예: Anderson, 1985 참조). 따라서 Goldberg와 Kay (1977)가 주장한 바와 같이 객체지향 언어들은, 설계자들이 실세계에 대한 그들의 인식을 설명하는 모델에 더 가깝게 구축하도록 하기 때문에 "진화되었다"는 말이 사실일지도 모르겠다. 여러 저자들이 제시하였듯이 객체지향 디자인은 사람들이 문제 영역을 자연스럽게 모형화하는데 더 나은 짝을 제공하므로 절차지향식 언어의 설계보다는 좀 더 "자연적이다" (Rosson & Alpert, 1990; Cox, 1984). <br />
<br />
앞의 두 가지 관점을 모두 뒷받침하는 Soloway, Bonar, Ehrlich (1983)는 사람들이 프로그래밍 해법의 설계와 관련해 자연스럽게 생각하는 방식과 더 일치하는 인식을 제공하는 프로그래밍 언어가 보다 사용하기 수월하다고 설명하였다. 하지만 그들은 프로그래밍 언어가 개인이 선호하는 설계 전략을 변화시키며 특정 언어의 구성에 경험이 많을수록 디자인 선호도가 바뀐다는 사실도 발견했다. 결론은, 누군가가 구현하는 언어ㅡC++, Smalltalk 등으로ㅡ는 시스템과 애플리케이션에 대한 사고 및 설계 방식에 영향을 미칠것이라는 사실이다. Smalltalk 전문가들과 C++ 개발자들은 서로 다른 언어로 프로그램을 구성하는데 그치는 것이 아니라, 서로 다른 언어로 말한다. 예를 들어 Smalltalk 프로그래머에게 클래스 객체는 클래스를 나타내는 진실된 스몰토크 객체인 반면, C++ 개발자들에게 클래스 객체란 사용자가 정의한 클래스의 인스턴스이다 (즉 내장된 C-언어 데이터 타입이 아니라는 의미다). 두 언어와 환경 간에 개념적으로 중복되는 부분이 상당히 많음에도 불구하고, 서로 상당히 다른 의견을 포함하고, 다른 문제를 표면화시키며, 최종적으로 프로그램 설계에 대한 서로 다른 사고를 이끌어 낸다.<br />
<br />
반대로 설계 시점에서 목표 언어를 고려할 필요도 있다: "설계 시점에서 언어를 고려하지 않을 경우, 문제를 미해결 상태로 남길 수 있으며… 그러한 설계로 인해 형편 없는 프로그램이 되기도 한다" (Smith, 1996a). 목표 언어를 선택할 때 발생할 수 있는 제약과 기회를 고려하지 않고 설계하는 것은 실수일 것이다. 스몰토크의 경우 언어 자체뿐 아니라 클래스 라이브러리와 내장된 프레임워크도 고려해야 한다. 예를 들어, VisualWorks 의 Mode-View-Controller 프레임워크, Visual Smalltalk 의 Mode-Pane 프레임워크, IBM Smalltalk 의 Motif-style 상호작용 프레임워크를 생각해보자ㅡ각 프레임워크는 실제로 대화형 애플리케이션의 설계를 시작하기도 전에 특정 디자인의 결정이 필요로 하기도 한다.<br />
<br />
이러한 생각을 명심하고 Smalltalk 과 C++ 이 어떻게 다른지ㅡ구체적으로 말해, 두 언어가 어떻게 설계에 영향을 미치는지ㅡ간단히 살펴보도록 하자. 이를 살펴보는 목적은, 두 언어에는 수많은 기본적 차이가 있기 때문에, 개발자들이 문제에 대해 생각하고, 해법을 설계하며, 디자인 패턴을 구현하는 방식이 서로 다를 수 밖에 없다는 우리의 주장을 뒷받침하기 위함이다(어느 한 언어가 뛰어나다는 주장을 하려는 것이 아니며, 두 언어의 비교를 통해 그 중 한 가지 언어를 선호하는 팬들을 언짢게 만들었다면 미리 사과드린다). 이 과정에서 구체적 특성의 유무에 따라 영향을 받을 일부 패턴을 언급하고자 한다.<br />
<br />
<br />
<br />
'''순수한 객체지향 대 혼합 객체지향'''<br />
<br />
Smalltalk 은 "순수한" 객체지향 언어이다. Smalltalk 에서 모든 계산은ㅡ가장 원시적 수준(과 일반 프로그래밍 활동과 관련이 없는 수준)을 제외한ㅡ객체로 전송하는 메시지의 결과로만 발생된다. 모든 것은 객체이며, 여기에는 숫자, 문자, 문자열과 같은 원시 데이터 타입<sup>Primitive Data Type</sup>을 포함한다. 반면 C++ 는 복합 언어로서, 절차지향의 C언어 기반에 객체지향의 특성들이 추가된 언어이다. 언어에서 제공되는 비객체지향적 특성을 이용할 수 있기 때문에, 설계자들과 프로그래머들에게서 서로 다른 사고방식을 이끌어 낸다. 예를 들어, C++ 에서는 어떠한 클래스에도 소속되지 않은 전역함수를 가질 수 있는 반면, Smalltalk 에서는 각 기능 부분의 책임을 누가 지는지를 반영해야만 한다. 복합 언어를 이용한 접근법은, 개인이 객체지향과 절차지향 패러다임의 이점을 모두 이용하는 프로그램을 사용하도록 하며, 기존의 C 코드로의 인터페이스가 훨씬 쉽다. 반면 복합 언어가 아닌 순수한 언어는 이해가 쉽다 (Rosson & Alpert, 1990).<br />
<br />
<br />
<br />
'''객체로서의 클래스'''<br />
<br />
모든 것을 객체로 간주하는 Smalltalk 에서 Class 는 first-class 런타임 객체를 의미한다. 이 사실은 메시지 송수신이 가능하며, 일반적으로 어떤 연산이라도 포함<sup>participating</sup>시킬 수 있음을 의미<ref name="역자주1">Smalltalk 에서 모든 클래스는 객체의 unique instance 라는 사실</ref>한다. C++ 에서는 이런것이 불가능하다. Smalltalk 에서의 인스턴스 생성은 클래스 객체가 수행하는 업무 중의 하나지만, C++ 에는 언어 자체에 내장되어 있다. 따라서 Smalltalk 에서는 인스턴스의 생성과 동작간의 차이가 덜 명확하다: 가장 기본 형태에서 인스턴스 생성은 특수 동작(specialization of behavior)에 불과하지만, C++ 에서는 엄밀히 별도 구분된다. 대부분의 패턴이, Smalltalk 안에서 완전한 패턴(Abstract Factory, Singleton, Factory Method 패턴 등)의 형태를 한 객체를 클래스로 포함하고 있다는 것을 알 수 있다.<br />
<br />
<br />
<br />
'''성숙하고 포괄적인 클래스 라이브러리'''<br />
<br />
주요 Smalltalk 환경의 이점 중 하나로서, 수 년 간 다듬고 디버깅해온 거대한 기본 클래스<sup>base class</sup> 세트가 있다. 라이브러리의 광범위한 사용 결과, 라이브러리에 포함된 저수준의 추상 데이터 타입마저 시간이 지나면서 향상된 덕분에 광범위한 성능을 가지게 되었다. 이렇게 광범위한 기능성은 특정 설계 고려사항은 물론이며, 심지어 일부 디자인 패턴 구현에 대한 고려사항조차 없애버린다. 예를 들어, Smalltalk 에서는 기본 Collection 클래스가 자체 반복<sup>iteration</sup> 메서드를 제공하기 때문에. 내부 반복자<sup>Iterator</sup>를 설계하거나 구현할 필요가 없다. Composite 를 포함한 다른 패턴들도 광범위한 기본 클래스 라이브러리의 기능성을 재사용하기 때문에 혜택을 받을 수 있다. '모든 것은 객체다'라는 주제로 다시 돌아가보면, 숫자, 문자열, Collection과 같은 추상 데이터 타입은 언어 자체에 내장된 블랙박스 데이터 타입이 아니라, 기본 클래스 라이브러리에서 사용자가 수정이 가능한 클래스로서 구현된다. 즉 이러한 클래스 내에서 사용자만의 메서드를 정의함으로써 객체의 기능성을 향상시킬 수 있음을 의미한다. 예를 들어, 새로운 타입의 반복자가 필요하다면 새로운 Iterator 클래스를 정의하기보다는 Collection 에서 새로운 메서드를 작성할 수 있다.<br />
<br />
<br />
<br />
'''강한 타이핑 대 약한 타이핑'''<br />
<br />
C++ 는 강한 타이핑 유형의 언어이다; 모든 변수는 컴파일러에 선언되며, 특정 타입이나 특정 클래스에 속한다. Smalltalk 는 더 넓은 범위의 동적 (또는 지연) 바인딩의 형태를 이용한다. 변수는 특정 클래스의 것으로 선언되지 않으며, runtimeㅡ객체가 실제로 인스턴스화되어 변수에 의해 참조될 때ㅡ으로 시작되기 전까지는 특정 타입(클래스)와 관련되지 않는다. <br />
<br />
두 언어 모두, 어떤 단일 변수도 서로 다른 시점에서 서로 다른 클래스의 인스턴스를 가리킬 수 있다. 그러나 Smalltalk 에서는 전체 계층구조 안의 어떠한 클래스라도 그에 해당되는 인스턴스가 될 수 있지만, C++ 에서는 특정 기반 클래스 또는 그 클래스에서 비롯된 하위클래스의 인스턴스가 된다. Collection 내의 객체에서도 마찬가지다. C++ 의 경우 목록의 모든 객체는 특정 타입이나 클래스에(또는 그 하위클래스) 속해야 하는 반면, Smalltalk 의 경우에는 일반적으로 Collection 의 어떤 클래스라도 여러 종류의 인스턴스를 포함할 수 있다. 이러한 특징은 많은 패턴에서 중요하다. 예를 들어, Smalltalk 에서 Iterator 는 내부가 다형적이기 때문에 더 강력하며, element type 마다 서로 다른 유형의 Iterator 를 정의할 필요가 없다. 강한 타이핑과 약한 타이핑은 Composite, Command, Adapter 패턴에서도 비슷한 역할을 한다. 마지막 예로서, C++ 의 Adapter 는, 해당 Adaptee 의 유형을 선언해야 하기 때문에, 선언된 클래스 또는 그 하위클래스의 객체만 조정해야 하지만, Smalltalk 에서는 Adapter 가 Adaptee 로 보낸 메시지를 포함하는 인터페이스를 가진 어떤 클래스에도 Adaptee 는 소속될 수 있다.<br />
<br />
약한 타이핑 언어는 유연성이 크지만 강한 타이핑 언어의 이점은 이보다 훨씬 더 많다. 예를 들어, 약한 타이핑이란 타입 안전성이 떨어짐을 의미한다. 게다가 변수가 구체적인 클래스에 속하도록 선언하는 경우, 프로그램의 포괄적인 정적 분석과 컴파일 시간의 최적 활용성을 제공한다.<br />
<br />
<br />
<br />
'''블록(Block)'''<br />
<br />
블록<sup>block</sup>은 자신이 코드를 실행하라는 메시지를 수신하기 전까지는 실행되지 않는, Smalltalk 코드를 포함하는 객체이다. 즉, 코드를 포함한 블록은, 일반적으로 언어 명령문(statement)처럼 한 가지씩의 방법이 연속적으로 만나면 발생(sequential encounter)하는 것이 아니라 블록에게 명시적으로 value 값 메시지를 (또는 그것의 변형체로) 전송할 때까지 대기된다. 블록은 하나의 객체이기 때문에 코드로 생성시켜 다른 객체로 전달할 수 있으며, 따라서 블록은 다른 객체에 의해 평가<sup>evalute</sup>되도록 사용자가 코드의 일부를 밀어낼 수 있도록 해주는데, 이는 구체적인 상황이나 상태가 발생할 경우로 제한된다. 이러한 특징은 대부분의 경우 매우 유용하며 Iterator 와 같은 패턴에 사용되는 것을 확인하게될 것이다. 블록은 또한 특이한 코드를 하나의 클래스의 각 인스턴스에 연결시키는데도 효율적이다(행위를 클래스의 모든 인스턴스에 적용하는 메서드와 반대로). 이는 Adapter 패턴의 대체 가능한 어댑터에 적용된다. 심지어 블록 구조는 제어구조를 컴파일러가 정해진 조건문이나 루프 구조로 제한하기보다는 언어 내에서 우리 고유의 제어 구조를 정의하도록 해준다 (Ungar & Smith, 1987). C++를 포함해 대부분 언어에는 코드의 일부를 메시지에 응답할 수 있는 first-class 객체로 만드는 구조가 없다. <br />
<br />
<br />
<br />
'''반영(refelection)과 메타수준의 성능'''<br />
<br />
스몰토크는 스몰토크 환경 자체에 대한 정보를 얻을 수 있는 코드를 작성하도록 허용한다. 클래스와 메서드는 기본 클래스 라이브러리에 존재하므로 프로그램이 클래스 계층구조 내의 클래스 간의 관계, 최근 실행한 프로세스의 메서드, 또는 특정 클래스의 인스턴스가 이해하는 메시지를 검색하도록 허용한다. 이러한 성능은 스몰토크 프로그래머의 툴킷에서 중요한 구성요소가 된다. 이는 개발환경 자체의 반영적 도구를ㅡ클래스와 메서드 브라우저, 디버거ㅡSmalltalk 에 내장시킬 수 있게 한다. 이런 코드들은 클래스 라이브러리에 포함되어 있기 때문에 추후 필요에 따라 도구를 개량 및 재정의하거나, 프로그램의 이해를 위해 새로운 도구로 통합시킬 수도 있다(예: Carroll et al., 1990). 다시 한 번 언급하지만 개인의 기호에 따라 다르다. Smalltalk 사용자들은 Smalltalk 환경으로 새로운 프로그래밍 도구를 구축하고 통합시키는 반영적 성능의 사용에 익숙하다. <br />
<br />
예를 들어, 메타 수준의 구조인 doesNotUnderstand: 를 이용해서, 사용자는 좀 더 향상된 기능성으로 객체를 "꾸미는" 목적, 또는 하나의 객체가 다른 기계 또는 데이터베이스의 다른 위치에 존재하는 객체에 대해 Proxy 의 역할을 하기 위한 목적으로 만들어진 모든 메시지를 가로챈 뒤에, 원하는 외부 객체로 이 메시지의 전송 여부와 시기를 결정할 수 있다. 또한 메시지 선택기<sup>selector</sup>를 기호 형식으로 저장해서, 내장된 perform: 메시지(와 그 변형체)를 이용해 객체로 그 메시지를 언제라도 호출할 수도 있다. 이는 함수 포인터를 사용해서 함수를 호출하는 C++ 의 기능과 유사합니다. 다른점이라면 Smalltalk 버전에서 메시지 서명의 기호<sup>symbolic</sup> 표현을 사용할 수 있다는 것입니다. Adapter, Observer, Command 같은 패턴의 Smalltalk 구현에 사용되는 perform: 이 표시되며 선택기의 기호 버전을 사용할 수 있다는 사실이 인터프리터에서 역할을 수행하게 됩니다.<br />
<br />
<br />
<br />
'''상속 의미론(Inheritance Semantics)'''<br />
<br />
2 개의 프로그래밍 언어에서 상속의 작용 방식에는 몇 가지 차이가 있는데, 그 중에 두 가지에 대해 알아보자. 첫째, C++ 는 다중 상속을 지원하지만, Smalltalk 는 클래스에 하나의 직접 상위클래스만 허용한다. 다중 상속은 몇 가지 문제에 대해 즉각적인 해법(예: [디자인 패턴]에서 Adapter 패턴에 대한 class adapter 버전)을 제공한다. Smalltalk 의 경우, 그러한 문제들에 대한 해법을 대안으로 마련해야 한다. 다중 상속은 프로그래밍 언어에 복잡성을 더한다; 프로그래머들은 이름의 충돌을 어떻게 처리하는지, 반복된 종속을 처리하기 위해 어떠한 규약이 준비되어 있는지를 알아야 한다(예: 동일한 상위클래스에서 상속된 두 클래스로부터의 상속). 다중 상속은 복잡성과 유용성 사이의 균형으로 인해 현대 Smalltalk 환경에서는 고의적으로 빠져있다. 다중 상속의 유용성이 그 사용으로 인한 추가적 복잡성보다 크지 않다고 생각했기 때문이다(특히 프로그램의 이해적 측면에서 볼 때). <br />
<br />
둘째, C++ 의 경우 함수의 동적 바인딩(상위클래스에서 선언되었으며, 하나 또는 그 이상의 하위클래스에 오버라이드된 경우)은 함수가 상위클래스에 가상으로 선언된 경우에만 작동된다. Rumbaugh et al. (1991)은 이런 특성이 확장성 및 점진적 재사용(특수화를 위해 다른 메서드를 오버라이딩하면서 상속을 통해 상위클래스의 행위 일부를 재사용)에 장애물이 될 수 있다고 지적했다. 처음으로 클래스<ref name="역자주2">일반적인 프로그래밍에서 부모클래스가 되는 클래스</ref>를 작성한 프로그래머가 메서드를 가상으로 선언하는 유일한 이유는, 아직 정의되지 않은 하위클래스에서 해당 연산을 오버라이드할 가능성이 있을때 뿐이며, 이런 방식의 선언은 클래스에 정의된 모든 함수마다 적용되어야 한다(Rumbaugh et al., 1991).<br />
<br />
<br />
<br />
'''대화형 개발 환경'''<br />
<br />
주요 Smalltalk 제품에서의 Smalltalk 개발은 항상 대화형 개발 환경에서 이루어진다. 여기에는 많은 의미가 함축되어 있는데, 쉬운 실험, 검사, 프로그램 이해를 위한 도구의 이용성, 재사용 가능한 클래스와 메서드 발견등이 있다.. 설계와 관련해서, 개발 환경은 각 메서드를 저장할때, 대상이 되는 메서드에 대한 증분 컴파일을 제공하기 때문에 하나의 설계로 결정되어 버리는걸 피할 수 있으며, Smalltalk 프로그래머들은 클래스에서 메서드를 수정하거나 추가하기 위해 전체 클래스의 소스를 재컴파일하거나 갖출 필요가 없다. 따라서 대체 어디에 새로운 기능이 위치해야 하는지에 관한 걱정을 덜 수 있다. 예를 들어, [디자인 패턴] 에서 Visitor 패턴을 사용하는 이유 대해, 새로운 동작의 추가로 인해 관련된 모든 클래스를 재컴파일해야 하는 경우를 소개하는데, 이런 경우라면 기능을 새로운 클래스에 위치시키고 기존 클래스의 인스턴스에 작용하도록 만드는 편이 낫다. 증분 컴파일은 주제와 관련 없는 문제를 제거함으로써 책임 중심의 설계를 허용하기 때문에, 코드를 기능적으로 또는 논리적으로 위치시킬 수 있다. 예를 들어, [디자인 패턴] 에서는, Visitor 패턴을 적용하는 이유중에 하나로서, 새로운 동작의 추가로 인해 관련된 모든 클래스를 재컴파일해야 하는 경우를 소개하는데, 이런 경우에서 C++ 는 기능을 새로운 클래스에 위치시키고 기존 클래스의 인스턴스에 작용하도록 만드는 편이 낫다. Smalltalk 에서 제공되는 증분 컴파일은 주제와 관련 없는 문제를 제거함으로써 코드를 기능적으로 또는 논리적으로 위치시킬 수 있기 때문에, 책임 중심의 설계를 허용한다.<br />
<br />
<br />
<br />
'''추상 클래스, private 메서드'''<br />
<br />
C++ 는 Smalltalk 에서는 지원하지 않는 바람직한 특성 몇 가지를 언어 차원에서 명시적으로ㅡ그리고 강제적으로ㅡ구현한다. 예를 들어 C++ 와 달리 Smalltalk 는 메서드의 privacy 를 선언하고 강제하는 컴파일 시점<sup>compile-time</sup> 메커니즘을 제공하지 않는다. 프로그래머들이 메서드를 private 으로 표시할 수는 있지만(예: 주석<sup>comment</sup>으로), 외부 객체가 실제로 그 메서드를 호출하지 못하도록 막는 내장된 메커니즘은 없다. 비슷한 경우로 Smalltalk 에는 추상적이어야 하는 클래스의 인스턴스화를 막는 메커니즘이 없다. Singleton 과 같은 패턴은 C++ 의 privacy 메커니즘에 대한 런타임 대체<sup>runtime substitute</sup>를 필요로 하기 때문에 Smalltalk 에서는 더 복잡한 해법이 필요해지는데, 이에 대해서는 해당 패턴에서 살펴볼 것이다. <br />
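Smalltalk 와 마찬가지로 Python 도 private 강제와 추상 클래스 인스턴스화 금지를 컴파일 시점에 제공하지 않으므로, 본문이 말하는 "런타임 대체"가 어떤 모습인지 Python 으로 흉내 낸 스케치다(클래스 이름은 임의의 가정이다).<br />

```python
class AbstractShape:
    def __new__(cls, *args, **kwargs):
        # 추상 클래스의 직접 인스턴스화를 '런타임에' 막는다
        if cls is AbstractShape:
            raise TypeError("AbstractShape is abstract")
        return super().__new__(cls)

    def area(self):
        # Smalltalk 의 self subclassResponsibility 에 해당하는 런타임 신호
        raise NotImplementedError("subclass responsibility")

    def _internal_helper(self):
        # 밑줄 접두사는 'private' 이라는 관례일 뿐, 외부 호출을 막지는 못한다
        return "not enforced"


class Circle(AbstractShape):
    def __init__(self, r):
        self.r = r

    def area(self):
        return 3.14159 * self.r * self.r


blocked = False
try:
    AbstractShape()  # 추상 클래스 인스턴스화 시도 — 런타임에만 거부된다
except TypeError:
    blocked = True
```

C++ 라면 순수 가상 함수와 private 선언으로 같은 제약을 컴파일 시점에 강제했을 것이다.<br />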
<br />
두 언어 간에는 이 외에도 많은 차이점이 있고, 각각의 장단점이 있다. 전체적으로 보면, C++ 는 효율성을 중요시하고 프로그래머가 오류를 피하도록 고안되었으며(Rumbaugh et al., 1991), Smalltalk 는 좀 더 유연하게 사용할 수 있도록 개발되었다. 다시 말하지만, 둘 중 하나가 어떤 면에서 "더 낫다"고 하고 싶은 건 아니다. 오히려 중요한 부분은 이 두 프로그래밍 언어가 '''서로 다르다는 것'''이며, Smalltalk 프로그래머와 C++ 프로그래머는 디자인 패턴을 다르게 설명할 가능성이 있다는 것이다. 따라서 이 책의 목표는 [디자인 패턴]의 23가지 패턴에 대한 Smalltalker 의 관점을 제공하는 것이다.<br />
<br />
<br />
<br />
==Notes==<br />
<references /><br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixerhttps://trans.onionmixer.net/mediawiki/index.php?title=DesignPatternSmalltalkCompanion:1.3&diff=5577DesignPatternSmalltalkCompanion:1.32018-07-19T14:08:34Z<p>Onionmixer: 검수 20180719 진행분</p>
<hr />
<div>===1.3 C++ != Smalltalk (또는 Smalltalk ~= C++)===<br />
<br />
스몰토크와 C++는 단순히 서로 다른 프로그래밍 언어가 아니다; 언어에 대한 설계와 언어로 프로그래밍하는 데에는 기본적인 차이가 있다. 둘 중 하나로 작업하는데 익숙한 설계자들은 디자인 문제와 해법을 서로 다르게 바라볼 것이다.<br />
<br />
개발자가 작업하는 언어가 문제 해법에 대해 생각하는 방식에 영향을 미칠 것이라는 주장을 자세히 살펴보자. 자연언어를 다루는 심리 언어학 영역을 가장 높은 수준에서 살펴보면, 언어와 사고는 서로 밀접하게 연결되어 있으며 서로 영향을 미친다는 사실을 재빠르게 확인할 수 있다. Benjamin Whorf (1956)는 "언어는 개인이 세상을 바라보는 방식 또는 개인이 사고하는 방식에 확실히 영향을 미친다"고 가정했다. 이러한 워프의 가설(Whorfian hypothesis)이 프로그래밍 언어에도 통용된다고 주장하는 사람들도 있음은 쉽게 짐작할 수 있다: 서로 다른 언어에 대한 다른 구문과 제어구조는 그 언어로 문제를 해결하는 방식에 영향을 미치는 것이다 (예: Curtis, 1985, 특히 6장 참조).<br />
<br />
이와 반대로 생각의 방식과 세계관도 언어에 영향을 미치는 것으로 보인다. 사실 많은 심리학 연구에서는 이것이 사실이라는 증거를 제시해왔다 (예: Anderson, 1985 참조). 따라서 Goldberg와 Kay (1977)가 주장한 바와 같이 객체지향 언어들은, 설계자들이 실세계에 대한 그들의 인식을 설명하는 모델에 더 가깝게 구축하도록 하기 때문에 "진화되었다"는 말이 사실일지도 모르겠다. 여러 저자들이 제시하였듯이 객체지향 디자인은 사람들이 문제 영역을 자연스럽게 모형화하는데 더 나은 짝을 제공하므로 절차지향식 언어의 설계보다는 좀 더 "자연적이다" (Rosson & Alpert, 1990; Cox, 1984). <br />
<br />
앞의 두 가지 관점을 모두 뒷받침하는 Soloway, Bonar, Ehrlich (1983)는 사람들이 프로그래밍 해법의 설계와 관련해 자연스럽게 생각하는 방식과 더 일치하는 인식을 제공하는 프로그래밍 언어가 보다 사용하기 수월하다고 설명하였다. 하지만 그들은 프로그래밍 언어가 개인이 선호하는 설계 전략을 변화시키며 특정 언어의 구성에 경험이 많을수록 디자인 선호도가 바뀐다는 사실도 발견했다. 결론은, 누군가가 구현하는 언어ㅡC++, Smalltalk 등으로ㅡ는 시스템과 애플리케이션에 대한 사고 및 설계 방식에 영향을 미칠것이라는 사실이다. Smalltalk 전문가들과 C++ 개발자들은 서로 다른 언어로 프로그램을 구성하는데 그치는 것이 아니라, 서로 다른 언어로 말한다. 예를 들어 Smalltalk 프로그래머에게 클래스 객체는 클래스를 나타내는 진실된 스몰토크 객체인 반면, C++ 개발자들에게 클래스 객체란 사용자가 정의한 클래스의 인스턴스이다 (즉 내장된 C-언어 데이터 타입이 아니라는 의미다). 두 언어와 환경 간에 개념적으로 중복되는 부분이 상당히 많음에도 불구하고, 서로 상당히 다른 의견을 포함하고, 다른 문제를 표면화시키며, 최종적으로 프로그램 설계에 대한 서로 다른 사고를 이끌어 낸다.<br />
<br />
반대로 설계 시점에서 목표 언어를 고려할 필요도 있다: "설계 시점에서 언어를 고려하지 않을 경우, 문제를 미해결 상태로 남길 수 있으며… 그러한 설계로 인해 형편 없는 프로그램이 되기도 한다" (Smith, 1996a). 목표 언어를 선택할 때 발생할 수 있는 제약과 기회를 고려하지 않고 설계하는 것은 실수일 것이다. 스몰토크의 경우 언어 자체뿐 아니라 클래스 라이브러리와 내장된 프레임워크도 고려해야 한다. 예를 들어, VisualWorks 의 Model-View-Controller 프레임워크, Visual Smalltalk 의 Model-Pane 프레임워크, IBM Smalltalk 의 Motif-style 상호작용 프레임워크를 생각해보자ㅡ각 프레임워크는 실제로 대화형 애플리케이션의 설계를 시작하기도 전에 특정 설계 결정을 내리도록 요구하기도 한다.<br />
<br />
이러한 생각을 명심하고 Smalltalk 과 C++ 이 어떻게 다른지ㅡ구체적으로 말해, 두 언어가 어떻게 설계에 영향을 미치는지ㅡ간단히 살펴보도록 하자. 이를 살펴보는 목적은, 두 언어에는 수많은 기본적 차이가 있기 때문에, 개발자들이 문제에 대해 생각하고, 해법을 설계하며, 디자인 패턴을 구현하는 방식이 서로 다를 수 밖에 없다는 우리의 주장을 뒷받침하기 위함이다(어느 한 언어가 뛰어나다는 주장을 하려는 것이 아니며, 두 언어의 비교를 통해 그 중 한 가지 언어를 선호하는 팬들을 언짢게 만들었다면 미리 사과드린다). 이 과정에서 구체적 특성의 유무에 따라 영향을 받을 일부 패턴을 언급하고자 한다.<br />
<br />
<br />
<br />
'''순수한 객체지향 대 혼합 객체지향'''<br />
<br />
Smalltalk 은 "순수한" 객체지향 언어이다. Smalltalk 에서 모든 계산은ㅡ가장 원시적 수준(과 일반 프로그래밍 활동과 관련이 없는 수준)을 제외한ㅡ객체로 전송하는 메시지의 결과로만 발생된다. 모든 것은 객체이며, 여기에는 숫자, 문자, 문자열과 같은 원시 데이터 타입<sup>Primitive Data Type</sup>을 포함한다. 반면 C++ 는 복합 언어로서, 절차지향의 C언어 기반에 객체지향의 특성들이 추가된 언어이다. 언어에서 제공되는 비객체지향적 특성을 이용할 수 있기 때문에, 설계자들과 프로그래머들에게서 서로 다른 사고방식을 이끌어 낸다. 예를 들어, C++ 에서는 어떠한 클래스에도 소속되지 않은 전역함수를 가질 수 있는 반면, Smalltalk 에서는 각 기능 부분의 책임을 누가 지는지를 반영해야만 한다. 복합 언어를 이용한 접근법은, 개인이 객체지향과 절차지향 패러다임의 이점을 모두 이용하는 프로그램을 사용하도록 하며, 기존의 C 코드로의 인터페이스가 훨씬 쉽다. 반면 복합 언어가 아닌 순수한 언어는 이해가 쉽다 (Rosson & Alpert, 1990).<br />
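"모든 것은 객체"라는 말의 의미를, 역시 숫자와 문자열을 온전한 객체로 다루는 Python 으로 관찰해 본 작은 예시다(문서의 언어인 Smalltalk 는 아니다).<br />

```python
# Smalltalk 의 "3 + 4" 는 객체 3 에 인자 4 를 담은 + 메시지를 보내는 것이다.
# Python 에서도 숫자는 객체이므로 같은 계산을 메서드 호출로 쓸 수 있다.
result = (3).__add__(4)

# 원시 타입처럼 보이는 값에도 클래스가 있고, 메시지를 보낼 수 있다.
number_is_object = isinstance(3, int)
shout = "abc".upper()  # 문자열 객체에 upper "메시지" 전송
```

C 의 int 나 char* 처럼 언어에 박힌 블랙박스 타입과 달리, 이런 값들은 메시지에 응답하는 일반 객체로 취급된다.<br />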
<br />
<br />
<br />
'''객체로서의 클래스'''<br />
<br />
모든 것을 객체로 간주하는 Smalltalk 에서 클래스는 first-class 런타임 객체이다. 이 사실은 클래스가 메시지를 송수신할 수 있으며, 일반적으로 어떤 연산에라도 참여<sup>participating</sup>할 수 있음을 의미<ref name="역자주1">Smalltalk 에서 모든 클래스는 객체의 unique instance 라는 사실</ref>한다. C++ 에서는 이런 것이 불가능하다. Smalltalk 에서의 인스턴스 생성은 클래스 객체가 수행하는 업무 중 하나지만, C++ 에서는 언어 자체에 내장되어 있다. 따라서 Smalltalk 에서는 인스턴스 생성과 일반 동작 간의 차이가 덜 명확하다: 가장 기본적인 형태에서 인스턴스 생성은 행위의 특수화<sup>specialization of behavior</sup>에 불과하지만, C++ 에서는 엄밀히 별도로 구분된다. Abstract Factory, Singleton, Factory Method 등 많은 패턴에서 클래스 객체가 패턴의 완전한 참여자로 포함된다는 것을 보게 될 것이다.<br />
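클래스가 first-class 런타임 객체라는 점을, 클래스 역시 객체인 Python 으로 옮겨 본 스케치다. 클래스를 변수로 전달해 인스턴스 생성을 맡기는 방식은 Abstract Factory / Factory Method 의 기본 발상과 닿아 있다. Dog / Cat 등의 이름은 임의의 가정이다.<br />

```python
class Dog:
    def speak(self):
        return "bark"


class Cat:
    def speak(self):
        return "meow"


def make_animal(animal_class):
    # Smalltalk 에서 클래스 객체에 new 메시지를 보내듯,
    # 전달받은 클래스 객체로 인스턴스를 만든다
    return animal_class()


# 클래스 자체를 데이터처럼 컬렉션에 담아 돌릴 수 있다
sounds = [make_animal(cls).speak() for cls in (Dog, Cat)]
```

C++ 에서 클래스는 이런 식으로 변수에 담거나 메시지를 보낼 수 있는 런타임 값이 아니다.<br />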
<br />
<br />
<br />
'''성숙하고 포괄적인 클래스 라이브러리'''<br />
<br />
주요 Smalltalk 환경의 이점 중 하나로서, 수 년 간 다듬고 디버깅해온 거대한 기본 클래스<sup>base class</sup> 세트가 있다. 라이브러리의 광범위한 사용 결과, 라이브러리에 포함된 저수준의 추상 데이터 타입마저 시간이 지나면서 향상된 덕분에 광범위한 기능을 가지게 되었다. 이렇게 광범위한 기능성은 특정 설계 고려사항은 물론이며, 심지어 일부 디자인 패턴 구현에 대한 고려사항조차 없애버린다. 예를 들어, Smalltalk 에서는 기본 Collection 클래스가 자체 반복<sup>iteration</sup> 메서드를 제공하기 때문에 내부 반복자<sup>Iterator</sup>를 설계하거나 구현할 필요가 없다. Composite 를 포함한 다른 패턴들도 광범위한 기본 클래스 라이브러리의 기능성을 재사용하기 때문에 혜택을 받을 수 있다. '모든 것은 객체다'라는 주제로 다시 돌아가보면, 숫자, 문자열, Collection 과 같은 추상 데이터 타입은 언어 자체에 내장된 블랙박스 데이터 타입이 아니라, 기본 클래스 라이브러리에서 사용자가 수정할 수 있는 클래스로 구현된다. 즉 이러한 클래스 안에 사용자만의 메서드를 정의함으로써 객체의 기능성을 향상시킬 수 있음을 의미한다. 예를 들어, 새로운 타입의 반복자가 필요하다면 새로운 Iterator 클래스를 정의하기보다는 Collection 에 새로운 메서드를 작성할 수 있다.<br />
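Collection 이 자체 반복 메서드를 제공하므로 별도의 Iterator 객체가 필요 없다는 점을, Smalltalk 의 do:, collect:, select: 에 대략 대응하는 Python 구문으로 옮겨 본 예시다(문서의 언어는 아니다).<br />

```python
numbers = [1, 2, 3, 4, 5]

# collect: [:x | x * x] 에 해당 — 각 요소를 변환한 새 컬렉션
squares = [x * x for x in numbers]

# select: [:x | x even] 에 해당 — 조건에 맞는 요소만 고른 컬렉션
evens = [x for x in numbers if x % 2 == 0]

# do: [:x | ...] 에 해당 — 별도의 Iterator 객체 없이 내부 반복으로 순회
total = 0
for x in numbers:
    total += x
```

반복 프로토콜이 컬렉션 쪽에 내장되어 있으므로, 요소 타입별로 Iterator 클래스를 새로 설계할 일이 없다는 것이 본문의 요지다.<br />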
<br />
<br />
<br />
'''강한 타이핑 대 약한 타이핑'''<br />
<br />
C++ 는 강한 타이핑 유형의 언어이다; 모든 변수는 컴파일러에 선언되며, 특정 타입이나 특정 클래스에 속한다. Smalltalk 는 더 넓은 범위의 동적 (또는 지연) 바인딩의 형태를 이용한다. 변수는 특정 클래스의 것으로 선언되지 않으며, runtimeㅡ객체가 실제로 인스턴스화되어 변수에 의해 참조될 때ㅡ으로 시작되기 전까지는 특정 타입(클래스)와 관련되지 않는다. <br />
<br />
두 언어 모두, 어떤 단일 변수도 서로 다른 시점에서 서로 다른 클래스의 인스턴스를 가리킬 수 있다. 그러나 Smalltalk 에서는 전체 계층구조 안의 어떠한 클래스라도 그에 해당되는 인스턴스가 될 수 있지만, C++ 에서는 특정 기반 클래스 또는 그 클래스에서 비롯된 하위클래스의 인스턴스가 된다. Collection 내의 객체에서도 마찬가지다. C++ 의 경우 목록의 모든 객체는 특정 타입이나 클래스에(또는 그 하위클래스) 속해야 하는 반면, Smalltalk 의 경우에는 일반적으로 Collection 의 어떤 클래스라도 여러 종류의 인스턴스를 포함할 수 있다. 이러한 특징은 많은 패턴에서 중요하다. 예를 들어, Smalltalk 에서 Iterator 는 내부가 다형적이기 때문에 더 강력하며, element type 마다 서로 다른 유형의 Iterator 를 정의할 필요가 없다. 강한 타이핑과 약한 타이핑은 Composite, Command, Adapter 패턴에서도 비슷한 역할을 한다. 마지막 예로서, C++ 의 Adapter 는, 해당 Adaptee 의 유형을 선언해야 하기 때문에, 선언된 클래스 또는 그 하위클래스의 객체만 조정해야 하지만, Smalltalk 에서는 Adapter 가 Adaptee 로 보낸 메시지를 포함하는 인터페이스를 가진 어떤 클래스에도 Adaptee 는 소속될 수 있다.<br />
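이질적인 컬렉션과 duck typing 에 의한 다형성을 Python 으로 옮겨 본 스케치다. TextSource / NumberSource 는 임의로 지은 이름이며, 마지막 Adapter 문장처럼 공통 상위클래스 없이 인터페이스만 맞으면 된다는 점을 보여준다.<br />

```python
class TextSource:
    def read(self):
        return "text"


class NumberSource:  # TextSource 와 아무 상속 관계도 없다
    def read(self):
        return "numbers"


def consume(source):
    # C++ 라면 source 의 타입(기반 클래스)을 선언해야 하지만,
    # 동적 타이핑에서는 read 메시지에 응답하기만 하면 어떤 객체든 가능하다
    return source.read()


mixed = [TextSource(), NumberSource()]   # 이질적인 컬렉션
results = [consume(s) for s in mixed]    # 하나의 순회 코드가 다형적으로 동작
```

같은 순회 코드가 요소의 클래스와 무관하게 동작하므로, element type 마다 Iterator 를 따로 정의할 필요가 없다는 본문의 설명과 연결된다.<br />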
<br />
약한 타이핑 언어는 유연성이 크지만, 강한 타이핑 언어에도 그만한 이점이 있다. 예를 들어, 약한 타이핑은 타입 안전성이 떨어짐을 의미한다. 반면 변수가 구체적인 클래스에 속하도록 선언하면, 프로그램에 대한 포괄적인 정적 분석과 컴파일 시점 최적화가 가능해진다.<br />
<br />
<br />
<br />
'''블록(Block)'''<br />
<br />
블록<sup>block</sup>은 자신이 코드를 실행하라는 메시지를 수신하기 전까지는 실행되지 않는, Smalltalk 코드를 포함하는 객체이다. 즉, 블록 안의 코드는 일반적인 언어 명령문<sup>statement</sup>처럼 순차적으로 만나는<sup>sequential encounter</sup> 시점에 실행되는 것이 아니라, 블록에 명시적으로 value 메시지(또는 그 변형체)를 전송할 때까지 실행이 미뤄진다. 블록은 하나의 객체이기 때문에 코드로 생성시켜 다른 객체로 전달할 수 있으며, 따라서 블록을 이용하면 코드 조각을 다른 객체에 넘겨 특정한 상황이나 상태가 발생할 때에만 평가<sup>evaluate</sup>되도록 할 수 있다. 이러한 특징은 대부분의 경우 매우 유용하며 Iterator 와 같은 패턴에 사용되는 것을 확인하게 될 것이다. 블록은 또한 (행위를 클래스의 모든 인스턴스에 적용하는 메서드와 반대로) 인스턴스마다 다른 고유의 코드를 하나의 클래스의 각 인스턴스에 연결시키는 데도 효율적인데, 이는 Adapter 패턴의 대체 가능한 어댑터에 적용된다. 심지어 블록 구조는 컴파일러가 정해 놓은 조건문이나 루프 구조에 제한되지 않고 언어 안에서 고유의 제어 구조를 정의할 수 있게 해준다 (Ungar & Smith, 1987). C++ 를 포함해 대부분의 언어에는 코드의 일부를 메시지에 응답할 수 있는 first-class 객체로 만드는 구조가 없다. <br />
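블록의 세 가지 용법ㅡ지연 실행, 코드 조각 전달, 고유 제어 구조 정의ㅡ을 Python 의 람다/클로저로 흉내 낸 스케치다(Smalltalk 블록의 완전한 대응물은 아니며, 개념 설명용이다).<br />

```python
# 블록은 만들어질 때가 아니라 value 메시지를 받을 때 실행된다
block = lambda: 3 + 4
result = block()  # [3 + 4] value 에 해당 — 호출 시점까지 실행이 지연된다

# 코드 조각을 다른 객체(여기서는 함수)에 넘겨, 조건이 맞을 때만 평가하게 한다
def if_positive(n, then_block, else_block):
    return then_block() if n > 0 else else_block()

msg = if_positive(5, lambda: "positive", lambda: "non-positive")

# 고유의 제어 구조 정의: Smalltalk 의 n timesRepeat: [...] 에 해당
def times_repeat(n, block):
    for _ in range(n):
        block()

counter = []
times_repeat(3, lambda: counter.append(1))
```

if_positive 나 times_repeat 는 설명을 위해 임의로 만든 이름으로, 조건문과 루프가 라이브러리 수준에서 정의될 수 있음을 보여준다.<br />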
<br />
<br />
<br />
'''반영(reflection)과 메타수준의 성능'''<br />
<br />
스몰토크는 스몰토크 환경 자체에 대한 정보를 얻을 수 있는 코드를 작성하도록 허용한다. 클래스와 메서드는 기본 클래스 라이브러리에 존재하므로 프로그램이 클래스 계층구조 내의 클래스 간의 관계, 최근 실행한 프로세스의 메서드, 또는 특정 클래스의 인스턴스가 이해하는 메시지를 검색하도록 허용한다. 이러한 성능은 스몰토크 프로그래머의 툴킷에서 중요한 구성요소가 된다. 이는 개발환경 자체의 반영적 도구를ㅡ클래스와 메서드 브라우저, 디버거ㅡSmalltalk 에 내장시킬 수 있게 한다. 이런 코드들은 클래스 라이브러리에 포함되어 있기 때문에 추후 필요에 따라 도구를 개량 및 재정의하거나, 프로그램의 이해를 위해 새로운 도구로 통합시킬 수도 있다(예: Carroll et al., 1990). 다시 한 번 언급하지만 개인의 기호에 따라 다르다. Smalltalk 사용자들은 Smalltalk 환경으로 새로운 프로그래밍 도구를 구축하고 통합시키는 반영적 성능의 사용에 익숙하다. <br />
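클래스 계층 관계나 인스턴스가 이해하는 메시지를 프로그램이 조회할 수 있다는 본문의 설명을, Python 의 내성<sup>introspection</sup> 기능으로 옮겨 본 작은 예시다(클래스 이름은 임의의 가정이다).<br />

```python
class Animal:
    pass


class Dog(Animal):
    pass


# 클래스 계층 구조 조회: Smalltalk 의 superclass 체인 탐색에 해당
hierarchy = Dog.__mro__            # (Dog, Animal, object)
is_subclass = issubclass(Dog, Animal)

# 객체가 특정 메시지를 이해하는지 조회: respondsTo: 에 해당
string_understands_upper = hasattr("abc", "upper")
int_understands_upper = hasattr(3, "upper")

# 객체가 이해하는 메시지 목록 조회
messages = [m for m in dir("abc") if not m.startswith("_")]
```

이런 조회가 일반 라이브러리 호출로 가능하다는 점이, 브라우저나 디버거 같은 도구를 환경 안에서 직접 구축할 수 있게 하는 토대다.<br />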
<br />
예를 들어, 메타 수준의 구조인 doesNotUnderstand: 를 이용해서, 사용자는 좀 더 향상된 기능성으로 객체를 "꾸미는" 목적, 또는 하나의 객체가 다른 기계 또는 데이터베이스의 다른 위치에 존재하는 객체에 대해 Proxy 의 역할을 하기 위한 목적으로 만들어진 모든 메시지를 가로챈 뒤에, 원하는 외부 객체로 이 메시지의 전송 여부와 시기를 결정할 수 있다. 또한 메시지 선택기<sup>selector</sup>를 기호 형식으로 저장해서, 내장된 perform: 메시지(와 그 변형체)를 이용해 객체로 그 메시지를 언제라도 호출할 수도 있다. 이는 함수 포인터를 사용해서 함수를 호출하는 C++ 의 기능과 유사하지만, Smalltalk 에서는 메시지 서명의 기호<sup>symbolic</sup> 표현을 사용할 수 있다는 점이 다르다. Adapter, Observer, Command 같은 패턴의 Smalltalk 구현에서 perform: 이 사용되는 것을 보게 될 것이며, 선택기의 기호 버전을 사용할 수 있다는 사실은 Interpreter 패턴에서 중요한 역할을 한다.<br />
<br />
<br />
<br />
'''상속 의미론(Inheritance Semantics)'''<br />
<br />
두 언어에서 상속이 작용하는 방식에는 몇 가지 차이가 있으며, 그중 두 가지에 대해 알아보자. 첫째, C++ 는 다중 상속을 지원하지만, Smalltalk 는 클래스에 하나의 직접 상위클래스만 허용한다. 다중 상속은 몇 가지 문제에 대해 즉각적인 해법(예: [디자인 패턴]에서 Adapter 패턴의 class adapter 버전)을 제공한다. Smalltalk 의 경우, 그러한 문제들에 대해 대안적 해법을 마련해야 한다. 다중 상속은 프로그래밍 언어에 복잡성을 더한다; 프로그래머들은 이름의 충돌을 어떻게 처리하는지, 반복 상속(예: 동일한 상위클래스에서 상속된 두 클래스로부터의 상속)을 처리하기 위해 어떠한 규약이 준비되어 있는지를 알아야 한다. 다중 상속은 복잡성과 유용성 사이의 균형으로 인해 현대 Smalltalk 환경에서 고의적으로 배제되었다. 다중 상속의 유용성이 그 사용으로 인한 추가적 복잡성보다 크지 않다고 생각했기 때문이다 (특히 프로그램의 이해라는 측면에서 볼 때). <br />
<br />
둘째, C++ 의 경우 함수의 동적 바인딩은 (상위클래스에서 선언되었으며, 하나 또는 그 이상의 하위클래스에 오버라이드된) 함수가 상위클래스에 가상으로 선언될 때만 작용된다. Rumbaugh et al. (1991)에서 언급되었듯이 이런 특성은 확장성과 점진적 재사용에 (특수화를 위해 다른 메서드를 오버라이딩하면서 상속을 통해 상위클래스의 행위 일부를 재사용)장애물이 될 수 있다. 클래스의 본래 프로그래머들이 메서드를 가상으로 선언하는 것은, 프로그래머가 아직 정의하지 않은 서브클래스가 오퍼레이션을 오버라이드할 가능성을 예상할 때 뿐이다ㅡ그리고 이러한 결정은 클래스에 정의된 모든 함수마다 이루어져야 한다 (Rumbaugh et al., 1991). <br />
<br />
<br />
<br />
'''대화형 개발 환경'''<br />
<br />
스몰토크 개발ㅡ주요 스몰토크 제품에서ㅡ은 항상 대화형 개발 환경에서 이루어진다. 여기에는 많은 의미가 함축되어 있는데, 실험의 용이, 검사, 프로그램 이해를 위한 도구의 이용성, 재사용 가능한 클래스와 메서드 발견이 포함된다. 설계와 관련해, 환경은 각 메서드를 저장 시 그에 대한 증분 컴파일을 제공하기 때문에 전적으로 하나의 디자인으로 결정하는 것을 피한다: 스몰토크 프로그래머들은 클래스에서 메서드를 수정하거나 추가하기 위해 전체 클래스의 소스를 재컴파일하거나 갖출 필요가 없다. 따라서 어디에 새로운 기능성이 위치해야 하는지에 관한 걱정을 덜 수 있다. 예를 들어, [디자인 패턴] 편에서 Visitor 패턴의 동기 중에는 새로운 오퍼레이션의 추가로 인하여 관련된 모든 클래스를 재컴파일해야 하는 경우를 소개한다; 따라서 이 기능성을 새로운 클래스에 위치시키고 기존 클래스의 인스턴스에 작용하도록 만드는 편이 낫다. 증분 컴파일은 주제와 관련 없는 문제를 제거함으로써 좀 더 책임 위주의 설계를 허용하므로 코드를 기능적으로 또는 논리적으로 위치시킬 수 있다. <br />
<br />
<br />
<br />
'''추상 클래스, private 메서드'''<br />
<br />
C++는 언어기반 특성을 포함하여 스몰토크에서는 지원하지 않는 바람직한 특성을 명시적으로ㅡ그리고 강제적ㅡ구현한다. 예를 들어 C++와 달리 스몰토크는 메서드의 privacy를 선언하고 강요하는 컴파일-시간 메커니즘을 제공하지 않는다. 프로그래머들이 메서드를 private로 작성할 수는 있지만 (예: comment) 외부 객체가 실제로 그 메서드를 호출하지 못하도록 막는 내장된 메커니즘은 없다. 마찬가지로 스몰토크에는 추상적이어야 하는 클래스의 인스턴스화를 막는 메커니즘이 없다. 스몰토크에서 C++ privacy 메커니즘에 대한 런타임 대체를 필요로 하기 때문에 위의 문제보다 더 복잡한 해법을 필요로 하는 Singleton과 같은 패턴에서 스몰토크가 어떤 역할을 하는지 살펴볼 것이다. <br />
<br />
두 언어 간에는 이 외에도 많은 차이점이 있고, 각각의 장단점이 있다. 전체적으로 보면, C++는 효율성을 중요시하고 프로그래머 오류를 피하도록 고안되었고 (Rumbaugh et al., 1991) 스몰토크는 좀 더 유연하게 사용할 수 있도록 개발되었다. 다시 말하지만, 둘 중 하나가 어떤 면에서 "더 낫다"고 밝히려는 것이 목적이 아니다. 오히려 결론은 이렇다: 두 언어는 서로 다르기 때문에 스몰토크 프로그래머들과 C++ 프로그래머들은 디자인 패턴의 예를 서로 다르게 설명하는 경향이 있다는 것이다.<br />
<br />
[[Category:DesignPatternSmalltalkCompanion]]</div>Onionmixer