Blog
Insights and Technology
Our story, vision and perspectives on technology, design and business solutions.

Featured Articles

News
5 min read
Announcement: Spiria is SOC 2 Type 2 certified
<div><h2>What is SOC 2 certification?</h2><p>SOC 2 (Service Organization Control 2) certification is a standard developed by the American Institute of Certified Public Accountants (AICPA) that assesses an organization's ability to manage the risks associated with the security, availability, processing integrity, confidentiality and privacy of the data it processes on behalf of its customers.</p><p>SOC 2 certification is based on five principles, known as trust criteria, which define the minimum requirements an organization must meet to ensure the security and quality of its services. These criteria are as follows:</p><ul> <li><strong>Security</strong>: the organization protects data against unauthorized access, modification, disclosure, damage or loss.</li> <li><strong>Availability</strong>: the organization ensures the availability and continuous operation of its services in accordance with customer agreements.</li> <li><strong>Processing integrity</strong>: the organization processes data in a complete, valid, accurate, timely and authorized manner.</li> <li><strong>Confidentiality</strong>: the organization respects confidentiality commitments and obligations towards its customers and third parties concerning the data it processes.</li> <li><strong>Privacy protection</strong>: the organization respects the privacy principles defined by the AICPA and the applicable laws concerning the collection, use, storage, disclosure and disposal of personal data.</li></ul><p>“Obtaining and maintaining the SOC 2 certification is, to me, like an ultramarathon rather than a 100-meter sprint. It's a first step in a long and continuously evolving process. Cybersecurity, as a whole, requires rigour and constant attention to detail, which our team is ready to invest in.”</p><p>– Vincent Huard, Vice President of Data Management and Analytics</p><p>To receive the SOC 2 certification, an organization must undergo an independent audit by a qualified accounting firm to ensure that it complies with the trust criteria applicable to its services. The audit covers the design and effectiveness of the controls put in place by the organization to ensure compliance with the five trust criteria.</p><h2>What is the difference between SOC 2 Type 1 and Type 2?</h2><p>There are two types of SOC 2 certification. Among other things, it is the duration of the audit that distinguishes them. SOC 2 Type 2 is covered by a more extensive and rigorous audit.</p><ul> <li>SOC 2 Type 1 certification attests that the organization complies with trust criteria on a given date. It assesses the design of controls, but not their effectiveness over time.</li> <li>SOC 2 Type 2 certification attests that the organization meets the trust criteria over a defined period of time, generally from three to twelve months. It assesses not only the design but also the effectiveness of controls, taking into account their actual use and evolution.</li></ul><p>In other words, SOC 2 Type 2 certification meets more demanding and rigorous criteria, as it involves continuous monitoring and regular verification of controls.
It offers greater assurance of the quality and security of the services provided by the organization.</p><h2>What are the benefits for our clients?</h2><p>By obtaining the SOC 2 Type 2 certification, Spiria reaffirms its position as a trusted partner in the development of digital solutions for its customers.</p><p>Here are some of the main benefits that enable our customers to undertake large-scale projects with peace of mind:</p><ul> <li>The guarantee that we uphold the highest standards of data security.</li> <li>The guarantee that we protect our customers' data against internal and external threats.</li> <li>The confidence that we ensure the availability and performance of our services.</li> <li>The confidence that we are able to react quickly and effectively in the event of an incident.</li> <li>The certainty that we treat your data with integrity, while complying with validation, accuracy, traceability and authorization rules.</li> <li>The peace of mind that we respect your confidentiality obligations and do not disclose your data to unauthorized third parties.</li> <li>The security of knowing that we respect privacy principles and comply with applicable laws on personal data.</li></ul><p>SOC 2 Type 2 certification is a guarantee of trust and security for our clients, testifying to our commitment to delivering quality services and upholding industry best practices. It represents excellence in data security across industries, and is becoming increasingly sought after for software development projects. It was therefore only natural for Spiria to be one of the few expert firms in North America to be certified.</p><p>We are proud to be certified and to guarantee the excellence, reliability and rigor of our business practices.</p><p>Start a project with confidence: <a href="mailto:NewProject@spiria.com">NewProject@spiria.com</a>.</p></div>

Strategy
5 min read
Choosing Between a Time-and-Materials or a Fixed-Price Contract
<div><p>Spiria teams have thorough and extensive experience with both types of projects. In this blog, we’ll share what we have learned on the subject over the years and what criteria contribute to the success of each option.</p><p>But first, let’s go over those two types of projects:</p><h3>Time & Materials projects</h3><p>These are projects whose scope (activities, deliverables, inclusions and exclusions, etc.) is moderately well defined. The initial proposal provides an estimated price range for completing the project, after which costs are billed based on actual hours worked plus the required hardware and resource expenses (such as software licenses or cloud services). This approach is more flexible, as it allows both parties to adjust or change the specifications throughout the development process. This encourages agility and puts an emphasis on project management controls.</p><h3>Fixed-price contracts</h3><p>In contrast, the scope of this kind of project is usually well or very well defined. The initial cost estimate can be stated with confidence because it is based on more reliable information than in a T&M project. As the name suggests, costs are established at the outset, regardless of the actual hours worked and the material and other resource expenses. Therefore, risk and profitability are critical considerations in opting for this type of contract. Any change to the initial specifications is governed by a change-request process and is billed as additional work.</p><p>Let’s imagine a first scenario in which a project has been previously defined. The client would opt for T&M or Fixed-price, a decision sometimes dictated by the organization’s internal requirements or even by industry regulations. This is often the case with calls-for-tender, which are mostly Fixed-price. Whenever possible, Spiria suggests an approach that leads to a better understanding of the project’s scope, thus mitigating risk.
Spiria could recommend that the client invest in an initial discovery phase, whether in T&M or in Fixed-price mode, then propose the actual development and deployment phases as Fixed-price. This helps the client assess whether it needs to change priorities or modify the scope as a result of the discovery phase. This flexibility allows us to negotiate the defined scope while amending the inclusions/exclusions, in order to remain within the agreed contractual Fixed-price budget.</p><p style="text-align: center;"><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/11800/process-en.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/11800/process-en.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/11800/process-en.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/11800/process-en.webp" style="width: 60%; border: none;" alt="A Typical Project Cycle." title="A Typical Project Cycle."></source></source></source></picture></p><p style="text-align: center; font-style: italic;">Figure 1. A Typical Project Cycle.</p><p>In a second case where the type of contract is not predetermined, we have more latitude to choose our strategy. A client schedules meetings with various suppliers for a Q&A session, followed by internal discussions to evaluate the factors leading to the best strategy. To help the teams decide, the table below presents a non-exhaustive list of criteria that are quantifiable (easily identifiable and measurable) or qualitative. The answers will depend on the information provided during the initial meetings and in the specifications, and on information obtained by asking the client directly.
The symbols in the two right-hand columns suggest ways to weigh the answers relative to the two types of projects.</p><table cellpadding="0" cellspacing="0" style="width:100%"> <tbody> <tr> <td style="width:76%"><strong>Points</strong></td> <td style="width:12%"><strong>Fixed</strong></td> <td style="width:12%"><strong>T&M</strong></td> </tr> <tr> <td>The business plan, requirements, needs and expectations are clear.</td> <td>➕➕</td> <td>➕</td> </tr> <tr> <td>The business rules and processes are numerous and complex.</td> <td>➕</td> <td>➕➕</td> </tr> <tr> <td>The client’s budget is defined and budget planning is set.</td> <td>➕</td> <td>➖</td> </tr> <tr> <td>The schedule is tight or critical due to the client’s circumstances or business context.</td> <td>➕</td> <td>➖</td> </tr> <tr> <td>The required expertise is clearly defined.</td> <td>➕</td> <td>➕</td> </tr> <tr> <td>The organizational and decision-making structure is large and complex.</td> <td>➖</td> <td>➕</td> </tr> <tr> <td>The legal aspects are complex.</td> <td>➖</td> <td>➕</td> </tr> <tr> <td>A past relationship already exists, or a mutual contact recommended us.</td> <td>➕</td> <td>➕</td> </tr> <tr> <td>The risk, uncertainties and contingencies are high.</td> <td>➖</td> <td>➕</td> </tr> <tr> <td>There is a high likelihood of scope creep.</td> <td>➖</td> <td>➕</td> </tr> <tr> <td>The client has staff or other internal capacity<br> (designer, development team, QA, etc.).</td> <td>➕</td> <td>➕</td> </tr> <tr> <td>The technological environment is familiar.</td> <td>➕</td> <td>➕</td> </tr> <tr> <td>There are significant technological constraints (e.g. legacy system).</td> <td>➖</td> <td>➕</td> </tr> <tr> <td>There are numerous, complex challenges to integrating the solution.</td> <td>➖</td> <td>➕</td> </tr> <tr> <td>The choice of technology is pre-established.</td> <td>➕</td> <td>➕</td> </tr> <tr> <td>Data is available to reliably do quality assurance.</td> <td>➕</td> <td>➕</td> </tr> <tr> <td>The solution is subject to special certifications.</td> <td>➖</td> <td>➕</td> </tr> </tbody></table><p><br>This reflection can lead to different approaches, represented in the following diagram:</p><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/11800/strategies-en.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/11800/strategies-en.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/11800/strategies-en.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/11800/strategies-en.png" style="width: 100%; border-style:solid; border-width:1px;" alt="Possible strategies or approaches." title="Possible strategies or approaches."></source></source></source></picture></p><p style="text-align: center; font-style: italic;">Figure 2. Possible strategies or approaches (click to enlarge).</p><p>The strategy selected dictates how the contract agreement is concluded and has implications for the entire life of the project and its final success. The relationship will start out on the right foot if our process is transparent and we can explain our reasoning to the client. Our ultimate objective is to deliver a project that respects our Spirian values and that provides the expected value to the client.</p></div>
All Articles

Culture
5 min read
FAX makes a big comeback
<div><p>We gave serious thought to our future needs and what our work life would look like after the pandemic. In addition to the new digs, we’ve invested in high-quality, ergonomic office furnishings and layouts. This was not an impulse decision; we took the time to visualize the lifestyle that awaits us in the Spiria office.</p><p>We sensed that after the lockdowns and the inevitability of telework, staff might need a little FAX (cute, right? 😉):</p><ul> <li>Flexibility</li> <li>Autonomy</li> <li>Experience</li></ul><p>With FAX in mind, we tried to imagine our future as workplace Spirians and developed our philosophy on post-pandemic teleworking:</p><h2>Flexibility</h2><p>For the past year, we have been free to work where we see fit and to adjust our schedules accordingly. It’s a freedom we appreciate and there are no plans to radically restrict it. Rather, we’re giving it a framework. We are aiming for 50% attendance. Call it half the week, half the month, half the year: it’s all about flexibility. Working vacation? Knock yourself out.</p><h2>Autonomy</h2><p>With flexibility comes autonomy. You be the judge of when your presence is needed in the office. There’s no hallway monitor to take attendance and count days at <a href="https://www.spiria.com">Spiria</a>. In fact, the criterion for deciding whether to go into the office is as basic as CX and EX (<i>Customer Experience</i> and <i>Employee Experience</i>). The question to ask is this: “Will my being in the office help support my clients’ needs and my colleagues’ efforts?” Yes? Get in there. We know you’ll do the right thing. Trust is part of our corporate culture.</p><h2>Experience</h2><p>Since March 2020, we’ve spent time and money decking out our home offices. It won’t be easy to leave our creature comforts. Of course, fabulous offices will go a long way to win you over. So it’s time to recast office life. You’re not coming all this way to sit in a cubicle.
You want to experience a workplace with a twist.</p><p>The Montreal campus with its fitness center and restaurant will give you exactly that Office 2.0 experience. Our office-culture team is planning happenings such as competitions, team lunches, happy hours, and other events that will bring us closer again. After a year of Zoom life, we’re ready for real life. We are craving unadulterated human interactions. We really believe that people want to see one another screenless and unfiltered. Remember when our conversations overlapped, interrupted, and paralleled one another? Just like bantering over a family dinner – and have we ever missed that too!</p></div>

Culture
5 min read
Spiria Montreal is moving into the new Fabrik8 building!
<div><h2>The Fabrik8 concept</h2><p>Co-founded by Pierre-Antoine Fernet and Vanessa Brochu, Fabrik8 is a shared workspace that welcomes both freelancers and SMEs of all types. The space facilitates synergies between the entrepreneurs who call it home and offers many services to promote the well-being of workers.</p><p>Originally located on Saint-Urbain Street, Fabrik8 made a splash by meeting the needs of young, dynamic companies in search of a stimulating work environment. Due to growing demand, it moved to a former industrial building on Waverly Street with over 3,000 m<sup>2</sup> (32,000 ft<sup>2</sup>) of floor space. The space offered teams of 1 to 35 employees several flexible configurations for closed offices and for common areas such as conference rooms, cafeteria, game rooms, and relaxation rooms.</p><p>Fabrik8’s meteoric rise didn’t stop there. Its founders set out to build an ambitious 18,600 m<sup>2</sup> (200,000 ft<sup>2</sup>) office complex on the same site, in order to offer unique services to the companies it hosts. The plans for the new buildings were entrusted to the architecture firm of Rocio H. Venegas, which is responsible for the LEED-certified renovation of 7250 Marconi Street, also in the neighborhood, which houses Gameloft Montreal. Construction is taking place in two phases.
The first, on Jean-Talon Street, was completed this winter and will welcome Spiria in a few months; the second, on De Castelnau Street, will begin this spring, once all the firms currently housed by Fabrik8 have transferred into the first-phase building.</p><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/fabrik8_lounge.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/fabrik8_lounge.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/fabrik8_lounge.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/6353/fabrik8_lounge.webp" style="width: 100%; border-style:solid; border-width:1px;" alt="Lounge and café at Fabrik8." title="Lounge and café at Fabrik8."></source></source></source></picture></p><p>Lounge and café. © Fabrik8.</p><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/fabrik8_sport.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/fabrik8_sport.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/fabrik8_sport.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/6353/fabrik8_sport.webp" style="width: 100%; border-style:solid; border-width:1px;" alt="The gym at Fabrik8." title="The gym at Fabrik8."></source></source></source></picture></p><p>The gym. © Fabrik8.</p><p>The new complex offers three levels of office space for companies of 1 to 50 people and three levels for larger companies, which have the option of renting an entire, customizable floor (1,765 m<sup>2</sup>, 19,000 ft<sup>2</sup>) or only a portion thereof. 
Its added value is that it complies with WELL certification, which is the first building standard focused on improving the health and well-being of the occupants. To earn this seal of excellence, Fabrik8 and the architect worked on many aspects of the building environment: thermal, acoustical and visual comfort, as well as lighting, air and water quality, among others. What’s more, the overall concept needs to integrate physical activity and healthy eating habits into daily life. A fully equipped sports center in the penthouse, as well as a spacious health-food cafeteria on the ground floor are available and accessible to all tenants, and active mobility is encouraged, with a large, secure bicycle parking space.</p><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/fabrik8_hockey.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/fabrik8_hockey.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/fabrik8_hockey.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/6353/fabrik8_hockey.webp" style="width: 100%; border-style:solid; border-width:1px;" alt="The rooftop ice rink at Fabrik8." title="The rooftop ice rink at Fabrik8."></source></source></source></picture></p><p>The rooftop ice rink. © Fabrik8.</p><p>And finally, the cherry (literally) on top... A completely unique feature that sets Fabrik8 apart from all other office buildings: the ice rink on the roof. Yep, you read that right, a real ice rink (Zamboni included), that in the warmer months converts to a multisport surface for basketball, soccer or handball games with colleagues.</p><h2>Why Fabrik8?</h2><p>Spiria cares deeply about the well-being of its employees and saw a natural fit with the concepts and values embodied by Fabrik8. 
By occupying an entire floor, we will enjoy a functional, fun and inspiring work environment that surpasses what most companies can provide. The build-out was entrusted to Ædifica, a design and architecture firm specializing in work environments that has already collaborated with large companies such as L’Oréal Canada, IBM, Air Transat, Bell, CN, Sanofi, WB Games, and others. The choices of furniture, materials, colors and space allocation were made in consultation with a committee of Spiria personnel to ensure that our future offices truly make everyone feel at home, at ease, and free to have fun.</p><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/aedifica_1.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/aedifica_1.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/aedifica_1.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/6353/aedifica_1.webp" style="width: 100%; border-style:solid; border-width:1px;" alt="Spiria office at Fabrik8." title="Spiria office at Fabrik8."></source></source></source></picture></p><p>© Ædifica.</p><p>All Spirians will enjoy a whole new range of services that our current facilities in the former Kiddies Togs garment factory can’t provide. 
Covered parking, gym with fitness classes, skating rink, cafeteria, terraces, ultramodern environment… so many new perks that will enhance our quality of life, without any loss of the advantages of the neighborhood that we know and love, like its proximity to the Parc and De Castelnau subway stations for easy access, Jarry Park for airing out our brains or for picnics with colleagues, the best food market in town, microbreweries whose beer list we know by heart, satisfying bowls at Soupson, and much more.</p><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/aedifica_2.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/aedifica_2.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6353/aedifica_2.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/6353/aedifica_2.webp" style="width: 100%; border-style:solid; border-width:1px;" alt="Spiria office at Fabrik8." title="Spiria office at Fabrik8."></source></source></source></picture></p><p>© Ædifica.</p><p>Sounds like fun, doesn’t it? Do you want to work in an exceptional environment within a company that is no less exceptional? We have many positions currently available, so go ahead and <a href="https://www.spiria.com/en/career/">check them out</a>. Perhaps you too will get to stretch out in the atmosphere that Spiria and Fabrik8 have created.</p></div>

Design
5 min read
Five design trends for 2021
<div><h2>Greater use of drawings</h2><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/zahidul.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/zahidul.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/zahidul.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/6260/zahidul.webp" style="width: 100%; border-style:solid; border-width:1px;" alt="Zahidul/Dribbble." title="Zahidul/Dribbble."></source></source></source></picture></p><p>© <a href="https://dribbble.com/zahidvector">Zahidul/Dribbble</a>.</p><p>This trend made its appearance a few years ago, has gained steady ground and hopefully will stay the course. More and more sites are updating and revamping their image using drawings. Up until the 60s, drawings were king in advertising and other means of communication. Then, photography took over. Now, we’re reverting to drawings.
I think this is a good thing, since abstract drawings have the power to bring us together: by resembling no-one, they include everyone.</p><p>Here are some online drawing libraries:</p><ul> <li><a href="https://www.humaaans.com">Humaaans</a></li> <li><a href="https://www.ls.graphics/illustrations">Lstore Graphics: Free and Premium Illustrations</a></li> <li><a href="https://s.muz.li/NzA3YjhkNWEy">Open Peeps</a></li> <li><a href="https://icons8.com/illustrations">Icons8 Illustrations</a></li> <li><a href="https://www.getillustrations.com">Get Illustrations</a></li></ul><h2>3D and isometric design</h2><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/isometric.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/isometric.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/isometric.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/6260/isometric.webp" style="width: 100%; border-style:solid; border-width:1px;" alt="Peter Tarka/Dribbble." title="Peter Tarka/Dribbble."></source></source></source></picture></p><p>© <a href="https://dribbble.com/tarka">Peter Tarka/Dribbble</a>.</p><p>Just like drawings, 3D is not exactly a new trend, but now it’s definitely here to stay, especially as it gets easier and easier to create 3D images, even for UI designers who are not familiar with it. <a href="https://spline.design">Spline</a>, for example, is a tool that makes 3D design easily accessible (currently in beta).</p><p>3D design lets you turn a concept into an apparently finished product, allowing users to truly envision it. 3D illustrations and other visuals will no doubt become pervasive, especially with the advent of virtual and augmented reality.</p><p>Everything can now be 3D-produced without ever using actual objects.
This can mean significant savings, for example for companies unveiling a luxury car or a future real-estate project. Most crucially, 3D design piques users’ interest and makes Web sites and interfaces more attractive.</p><h2>Colourful design</h2><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/deut-huit-huit.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/deut-huit-huit.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/deut-huit-huit.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/6260/deut-huit-huit.webp" style="width: 100%; border-style:solid; border-width:1px;" alt="Deux Huit Huit." title="Deux Huit Huit."></source></source></source></picture></p><p>Website of <a href="https://deuxhuithuit.com/fr/">Deux Huit Huit</a>.</p><p>Colourfully speaking, so many trends have come and gone. Some of them went for austerity: remember the black-and-white phase? The monochrome one? Now, bold and vibrant colours take the stage. Check out the designs of <a href="https://carrenoir.com">Carré noir</a>, or the site of design agency <a href="https://deuxhuithuit.com/fr/">Deux Huit Huit</a>, or the thousands of references on <a href="https://dribbble.com">Dribbble</a>; so many bright, colourful designs.
Do not fear the huge expanses of saturated pigment, the neon effects or the vibrant hues: 2021 is a colourful year.</p><h2>Minimalism</h2><p><picture><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/revolut.400x0.webp" media="(max-width: 599px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/revolut.760x0.webp" media="(max-width: 999px)"><source type="image/webp" srcset="https://mirror.spiria.com/site/assets/files/6260/revolut.1039x0.webp" media="(min-width: 1000px)"><img src="https://mirror.spiria.com/site/assets/files/6260/revolut.webp" style="width: 100%; border-style:solid; border-width:1px;" alt="Revolut." title="Revolut."></source></source></source></picture></p><p>Website of <a href="https://www.revolut.com/">Revolut</a>.</p><p>2021 is also a year of minimalism. Gracefully layered text, pleasingly proportioned margins, legible and clear interfaces: an excellent choice as epitomized by <a href="https://www.sketch.com">Sketch</a>, <a href="https://theordinary.deciem.com">The Ordinary</a> and <a href="https://weaintplastic.com">We Ain’t Plastic</a>. These sites are proof that you don’t need a hyper-complex UI to have the “Wow” factor.</p><p>Here’s the secret to minimalist design: “<a href="https://uxdesign.cc/a-guide-to-minimalist-design-36da72d52431">A guide to minimalist design - The reign of white space</a>.”</p><h2>Microinteraction</h2><img data-gifffer="/site/assets/files/6260/shot.gif" style="width: 100%; border-style:solid; border-width:1px;" data-gifffer-alt="Aaron Iker." title="Aaron Iker."><p>© <a href="https://dribbble.com/ai">Aaron Iker</a>.</p><p>Microinteraction means paying attention to all the tiny details that delight the user, that create a moment that is engaging and welcoming. As users grow increasingly blasé about complex, sophisticated animations, it’s better to bank on small, animated details to engage them.
For example, animation can be used to show the changed state of a button the user clicks on, to mark the toggling between two pages, or even the progression through different stages in a process. Prepare to see a lot of such animations in 2021.</p></div>

Dev's Corner
5 min read
C++: The Worst of Both Worlds
<div><p>In contrast, Python is often described as one of the easiest programming languages to learn and read. It is also very dynamic, allowing any data to pass around to any function very easily. Better yet, functions can be defined and re-defined at run-time. When you call a function, you are never sure what code will <i>really</i> be executed.</p><p>So, why not combine the two? Combine the complexity and obscure syntax of C++ templates with Python dynamic typing and dynamic extensibility? If this kind of revolting brew interests you, then I’ve got something for you!</p><h2>Now, Seriously…</h2><p>The real story is that I wanted to have overloaded function resolution at run-time instead of compile-time.</p><p>I wanted to create a dynamic function-dispatch system that could call a function with any data. I wanted this system to be dynamically extensible to allow new overloads of the function for new data types at any time. I also wanted to be able to call the functions with either concrete data or a bunch of <code>std::any</code>. I also wanted all this to be reasonably efficient.</p><p>To achieve all these goals, I turned to templates. Not just simple ones though, but rather the more complex variadic templates.</p><h2>Variadic Template Syntax</h2><p>What’s a variadic template? Normal templates require a fixed number of types as arguments. That’s fine if you know in advance how many types you’ll need. Variadic templates, on the other hand, accept a variable number of types as arguments. It can be zero, one, two, or any number of types.</p><p>Besides receiving these types, templates must also be able to use them. As you may already know, templates are compile-time beasts. They need to work without modifying any data. Therefore, to manipulate a variable number of types, C++ needed a new syntax.
The new syntax for both receiving the types and using them was created around the ellipsis: <code>...</code>.</p><p>The basic idea is that whenever the ellipsis is used, the C++ compiler knows it has to repeat the surrounding piece of code as many times as needed for each type. For example, the types of the variadic template are received with the ellipsis. In the following example, the <code>VARIA</code> template argument represents any number of types.</p><pre><code>template <class... VARIA>
struct example
{
    // template implementation.
};</code></pre><p>Later on, in the template implementation, the variadic type arguments can be used in the code with the ellipsis. For example, the variadic template above could have a function that receives arguments of the corresponding types and passes these values into a call to another function, like this:</p><pre><code>// Receive a variable number of arguments…
void foo(VARIA... function_arguments)
{
    // … and pass them on to another function.
    other_bar_function(function_arguments...);
}</code></pre><p>These examples only scratch the surface of what is possible with variadic templates, but they are sufficient for the purposes of this article.</p><h2>Dynamic Dispatch Design</h2><p>Before delving into the design of our dynamic dispatch, we need to outline our requirements more precisely. I said the dynamic dispatch should mimic the compile-time function overload of C++. What does that mean exactly? 
Well, here are the features of an ideal design:</p><ul> <li>The function itself is declared at compile-time and is referred to by its name, like a normal function.</li> <li>The number of arguments of a given function can vary.</li> <li>Each overload of a function can return different types of values.</li> <li>Each such function can be overloaded for any type.</li> <li>New function overloads can be added dynamically, at run-time, for any type.</li></ul><p>While these requirements would be sufficient for our purposes, there were a few additional use cases I wanted to cover. The first was to support function arguments that always have the same type. For example, a text-streaming function would always receive a <code>std::ostream</code> argument. The second was to be able to select a function implementation without having to pass a value to the function. This would allow specifying the return type of the function or implementing functions that take no argument at all. I’ll be showing you an example of each of these cases later on.</p><p>To support these use cases, we added two features to the list:</p><ul> <li>Not all arguments have to play a part in the type-based selection of the function.</li> <li>Some additional types <i>can</i> play a part in the type-based selection of the function without being an argument.</li></ul><p>The result should look like a compile-time function overload. For example, here is what a call to the dynamic-dispatch <code>to_text</code> function looks like:</p><pre><code>std::wstring result = to_text(7);
// result == "7"

std::any seven(7);
std::wstring result2 = to_text(seven);
// result2 == "7"</code></pre><p>This apparent simplicity is supported by a lot of complex code behind the scenes.</p><h2>Smooth Operator</h2><p>Before demonstrating the implementation of the dynamic dispatch, we’ll show what it looks like from the viewpoint of the programmer creating a new operation. 
How do we create a new function?</p><p>To create a new operation called <code>foo</code>, declare a class to represent it. For the purposes of our example, we named it <code>foo_op_t</code>, derived from <code>op_t</code>. The <code>foo_op_t</code> class identifies the operation. It can be entirely empty. Afterward, we can write the <code>foo</code> function, the real entry-point for the operation. That is the function that the user of the <code>foo</code> operation will call. This function only needs to call <code>call<>::op()</code> (for concrete values) or <code>call_any<>::op()</code> (for <code>std::any</code> values), both of which are found in <code>foo_op_t</code>, which takes care of the dynamic dispatch:</p><pre><code>struct foo_op_t : op_t<foo_op_t> { /* empty! */ };

inline std::any foo(const std::any& arg_a, const std::any& arg_b)
{
    return foo_op_t::call_any<>::op(arg_a, arg_b);
}

template<class A, class B, class RET>
inline RET foo(const A& arg_a, const B& arg_b)
{
    std::any result = foo_op_t::call<>::op(arg_a, arg_b);
    // Note: we could test if the std::any really contains
    // a RET, instead of blindly trusting it.
    return std::any_cast<RET>(result);
}</code></pre><p>Note that the base class of the new operation takes the operation itself as a template parameter. This is a well-known trick in template programming. In fact, it is so well-known that it even has a name: the curiously recurring template pattern. In our case, this trick is used so that the <code>op_t</code> can refer to the specific operation being used.</p><p>Now, we can create overloads of the <code>foo</code> operation. This is done by calling <code>make<>::op</code> with a function that implements the overload. To create an overload that takes types <code>A</code> and <code>B</code> and returns the type <code>RET</code>, we call <code>make<>::op<RET, A, B></code>. This registers the overload in the <code>foo_op_t</code> class. 
As an example, let’s implement our <code>foo</code> operation for the types <code>int</code> and <code>double</code> and make it return a <code>float</code>:</p><pre><code>// Some code in your program that implements the operation.
float foo_for_int_and_double(int i, double d)
{
    return float(i + d);
}

// Registration!
foo_op_t::make<>::op<float, int, double>(foo_for_int_and_double);</code></pre><p>Of course, we could make the code shorter by writing the implementation right there in the call to <code>make<>::op</code>, with a lambda:</p><pre><code>foo_op_t::make<>::op<float, int, double>(
    [](int i, double d) -> float
    {
        return float(i + d);
    });</code></pre><p>In case you were wondering why <code>call<></code> and <code>make<></code> carry those empty angle brackets, it’s because they are themselves variadic templates. The optional template arguments are the extra selection types used to choose a more specific overload based on types that are not passed as an argument to the <code>foo</code> operation. We will explain this in greater detail later.</p><p>Now, we are finally ready to get into the meat of the subject: implementing the dynamic function dispatch.</p><h2>Enter Selector</h2><p>The first problem to tackle is how each overload is identified within a function family. The obvious solution is to identify it by its types, that is, by its argument types and extra selection types. C++ provides <code>std::type_info</code> and <code>std::type_index</code> to identify a type. What we need is a <code>tuple</code> of these <code>type_index</code> values. We achieve that with a pair of templates: the type converter and the selector.</p><p>The type converter maps any type to <code>std::type_index</code>. It is a very idiomatic trick in template programming, where each step in an algorithm is implemented in a type so that it can be executed at compile-time. 
Below is the converter, converting any type <code>A</code> into a <code>type_index</code> or <code>std::any</code>:</p><pre><code>template <class A>
struct type_converter_t
{
    using type_index = std::type_index;
    using any = std::any;
};</code></pre><p>The full type selector can then be written as a variadic template by applying the converter to all types given as argument and declaring a <code>tuple</code> type named <code>selector_t</code> with the result. It uses both the function’s argument types, <code>N_ARY</code>, and the extra selection types, <code>EXTRA_SELECTORS</code>, to create the full selector.</p><pre><code>template <class... EXTRA_SELECTORS>
struct op_selector_t
{
    template <class... N_ARY>
    struct n_ary_t
    {
        // The selector_t type is a tuple of type_index.
        using selector_t = std::tuple<
            typename type_converter_t<EXTRA_SELECTORS>::type_index...,
            typename type_converter_t<N_ARY>::type_index...>;
    };
};</code></pre><p>Note how the ellipsis is applied to the line:</p><pre><code>typename type_converter_t<EXTRA_SELECTORS>::type_index...</code></pre><p>How the C++ language applies the ellipsis is part of the black magic of variadic templates. Sometimes, you will have to go by trial and error to see what works and what doesn’t.</p><p>Now we have a selector type, but how do we use it? To do this, we provide a few functions. The goal is to have a function that creates a selector pre-filled with concrete types. Naturally, we call our function <code>make</code>:</p><pre><code>template <class... EXTRA_SELECTORS>
struct op_selector_t
{
    template <class... N_ARY>
    struct n_ary_t
    {
        static selector_t make()
        {
            return selector_t(
                std::type_index(typeid(EXTRA_SELECTORS))...,
                std::type_index(typeid(N_ARY))...);
        }
    };
};</code></pre><p>Since we want to support calls with <code>std::any</code>, we need to provide a <code>make_any</code> function with <code>std::any</code> as input. 
(For optimization purposes, a version with the extra selector already converted to <code>type_index</code> is provided and named <code>make_extra_any</code>, but it is not shown here.)</p><pre><code>static selector_t make_any(
    const typename type_converter_t<N_ARY>::any&... args)
{
    return selector_t(
        std::type_index(typeid(EXTRA_SELECTORS))...,
        std::type_index(args.type())...);
}</code></pre><h2>Diving into Delivery</h2><p>We can finally dive into the mechanical details of the registration and calling of the operations. The operation base class is declared as a template taking the operation itself and a list of optional unchanging extra arguments, <code>EXTRA_ARGS</code>, which have fixed types. (Remember our earlier streaming operation example, which always received a <code>std::ostream</code>.)</p><pre><code>template <class OP, class... EXTRA_ARGS>
struct op_t
{
    // Internal details will come next...
};</code></pre><p>Let’s first show a few types that are used repeatedly: the selector class, <code>op_sel_t</code>, the selector tuple, <code>selector_t</code>, and the internal function signature of the operation, <code>op_func_t</code>.</p><pre><code>using op_sel_t = typename op_selector_t<EXTRA_SELECTORS...>
    ::template n_ary_t<N_ARY...>;

using selector_t = typename op_sel_t::selector_t;

using op_func_t = std::function<std::any(
    EXTRA_ARGS...,
    typename type_converter_t<N_ARY>::any...)>;</code></pre><p>This illustrates some of the inherent complexity of template programming. Many of its parts would normally be totally unnecessary, but are nevertheless required due to the internal workings of templates. For example, <code>typename</code> is necessary to tell the compiler that what follows really is a type. This happens when a template refers to elements of another template. The C++ syntax is too ambiguous to let the compiler infer that we are using a type. 
Another very peculiar aspect is the extra <code>template</code> keyword right before accessing <code>n_ary_t</code>: it is needed to let the compiler know that it really is a template.</p><p>We’re now ready to explain the whole system, which is put together with just a few functions:</p><ul> <li>A public way to call the operation: <code>call<>::op</code></li> <li>A public way to make a new overload: <code>make<>::op</code></li> <li>A private way to look up the correct overload: <code>get_ops</code></li></ul><p>We will tackle each in reverse order, from the lowest implementation details up to the final operation: calling an overload.</p><h2>Keeper of Wonders</h2><p>The lowest implementation detail is the function that holds the available, pre-registered overloads. There is a very important reason why <code>get_ops</code> needs to exist: the overloads need to be kept in a container, but the operation base class is a template. We cannot keep all overloads for all operations together. Fortunately, the C++ language specifies that a static variable contained in a function in a template is specific to each instantiation of the template. This lets us hide the registration container within the function. The <code>get_ops</code> function safely holds our list of overloads:</p><pre><code>template <class SELECTOR, class OP_FUNC>
static std::map<SELECTOR, OP_FUNC>& get_ops()
{
    static std::map<SELECTOR, OP_FUNC> ops;
    return ops;
}</code></pre><p>The fact that it is templated over <code>SELECTOR</code> and <code>OP_FUNC</code> allows the operation to be overloaded for any number of arguments.</p><h2>Making Up Your Op</h2><p>The <code>make<>::op</code> function is a template that takes a concrete overload for concrete types, wraps it into the internal function signature and registers it. The wrapping takes care of converting the <code>std::any</code> arguments into concrete types. 
This is safe, since the concrete overload for these concrete types is only ever called when the types match. This is where the optional extra selection types may be given as the <code>EXTRA_SELECTORS</code> template arguments.</p><pre><code>template <class... EXTRA_SELECTORS>
struct make
{
    template <class RET, class... N_ARY>
    static void op(
        std::function<RET(EXTRA_ARGS... extra_args, N_ARY... args)> a_func)
    {
        // Wrapper kept as a lambda mapping the internal
        // function signature to the concrete function signature.
        op_func_t op(
            [a_func](
                EXTRA_ARGS... extra_args,
                const typename type_converter_t<N_ARY>::any&... args) -> std::any
            {
                // Conversion to concrete argument types.
                return std::any(a_func(
                    extra_args...,
                    *std::any_cast<N_ARY>(&args)...));
            });

        // Registration.
        auto& ops = get_ops<selector_t, op_func_t>();
        ops[op_sel_t::make()] = op;
    }
};</code></pre><h2>Call Me Up, Call My Op</h2><p>We finally reach the function used to dispatch a call. There are three versions of the function. What differentiates them is whether the arguments have already been converted to <code>std::any</code> or <code>std::type_index</code>. The <code>call<>::op</code> function needs to do a few things:</p><ul> <li>Create a selector from the types of its arguments, plus the optional extra selectors.</li> <li>Retrieve the list of available overloads.</li> <li>Look up the function overload using the selector.</li> <li>Return an empty value if no overload matches the arguments.</li> <li>Call the function if an overload matches the arguments.</li></ul><pre><code>template <class... EXTRA_SELECTORS>
struct call
{
    template <class... N_ARY>
    static std::any op(EXTRA_ARGS... extra_args, N_ARY... args)
    {
        // The available overloads.
        const auto& ops = get_ops<selector_t, op_func_t>();

        // Try to find a matching overload.
        const auto pos = ops.find(op_sel_t::make());

        // Return an empty result if no overload matches.
        if (pos == ops.end())
            return std::any();

        // Call the matching overload.
        return pos->second(extra_args..., args...);
    }
};</code></pre><h2>Wrapping Up</h2><p>This completes the description of the dynamic dispatch design and its implementation. The source code repo contains multiple examples of operations with a complete suite of tests.</p><p>The examples of operations are:</p><ul> <li><code>compare</code>, a binary operation to compare two values.</li> <li><code>convert</code>, a unary operation to convert a value to another type. This is an example of an operation with an extra selector argument, the final type of the conversion.</li> <li><code>is_compatible</code>, a nullary operation that takes two extra selection types to verify if one can be converted to the other.</li> <li><code>size</code>, a unary operation that returns the number of elements in a container, or zero if no overload was found.</li> <li><code>stream</code>, a unary operation to write a value to a text stream. This is an example of an operation with an extra unchanging argument, the destination <code>std::ostream</code>.</li> <li><code>to_text</code>, a unary operation that converts a value to text.</li></ul><p>The whole code base is found in the <code>any_op</code> library that is part of my <a href="https://github.com/pierrebai/dak_utility">dak_utility repo</a>.</p></div>

Artificial Intelligence
5 min read
Six Misconceptions about Artificial Intelligence
<div><h2>Machines learn by themselves</h2><p>That’s the impression you get. The reality is that machines are not yet at the stage where they can make their own decisions about their field of application. And what decisions they do make are grounded in a considerable amount of human work upstream. Experienced specialists still have to formulate the problem, prepare the models, determine the appropriate training data sets, eliminate the potential biases induced by these data, and so on. Then, they have to adjust the software in light of its performance. AI models are still dependent on countless human brain-hours.</p><h2>Machines are objective</h2><p>Nothing could be further from the truth. After all, the design of the hardware and the programming of the software are human creations. In machine learning, objectivity is a function of the neutrality of the datasets that are submitted to the training model. Since cognitive bias is almost inevitable, the trickiest part of preparing the data is to limit this bias as much as possible. Often, a model reproduces a confirmation bias that it has inherited from its human creators. As they say: garbage in, garbage out.</p><h2>AI is the same thing as machine learning</h2><p>While it is true that almost all current applications of AI concern machine learning, the fact is that machine learning, or the idea that machines can learn and adapt through experience, is only one tool of AI. Perhaps one day we will discover new methods of solving problems not suited to machine learning, for example problems for which we do not have large amounts of qualified data. AI encompasses the more general concept whereby machines can perform tasks in an “intelligent” way, i.e. using functions similar to human intelligence. That said, the concept of AI has no commonly accepted definition and its limits are blurred. 
Perhaps it would be more appropriate to call it “complex information processing” or “cognitive automation”, but that would certainly be less sexy.</p><h2>AI will kill jobs</h2><p>As was the case with the automation and robotization of recent decades, it would be more accurate to say that AI technologies will replace some jobs and transform others. In other words, AI will profoundly change the nature of work, as was the case in previous industrial revolutions, but probably not reduce the overall number of jobs. Just like robotization made it possible to eliminate repetitive manual tasks, AI makes it possible to eliminate repetitive intellectual tasks, freeing up capacity to work in a new and more intelligent way. And just like robotization, AI can be more efficient than any human for certain tasks. Take, for example, an AI-based application for examining lung X-rays that can detect disease much faster and more reliably than radiologists.</p><h2>AI is not useful in my company</h2><p>Are you sure? AI can already improve interactions with customers, analyze data faster, assist in decision-making, generate early warnings of upcoming disruptions, and more. Why deprive yourself of it? It also has a number of useful applications in an industrial environment, for example computer vision/recognition, which allows it to detect a defective part much more efficiently and quickly than a human operator. Ignoring AI is like shunning the benefits of automation, at the cost of putting the company at a competitive disadvantage. AI is nothing more or less than the logical extension of the industrial revolution of automation/robotization.</p><h2>Super-intelligent machines will surpass humans</h2><p>Today’s AI applications are very context-specific, i.e. they respond to highly focused problems. Generalized intelligence like human or natural intelligence, which is capable of tackling any number of different tasks, is not yet on the agenda and belongs to the realm of science fiction. 
Mind you, <a href="https://en.wikipedia.org/wiki/From_the_Earth_to_the_Moon">back in 1865</a>, moon travel also belonged to the realm of science fiction. While we cannot positively state, at this point, that AI will not surpass humans eventually, we think we can safely say that super-robots will not be able to surpass humans in everything within our lifetimes.</p></div>

Strategy
5 min read
Setting a strategy to pay down technical debt
<div><p>Over the last few months, I have worked on two applications weighed down by heavy technical debt; for one of these applications, we had to resort to a progressive payment strategy, which lends itself well to most <a href="https://www.spiria.com/en/services/purpose-built-development/custom-software-development/">development projects</a>.</p><p>At first glance, trying to assess the cost of technical debt can seem complex, and you must keep in mind that minimum effort barely allows you to keep up with the interest, meaning that every new feature is like a new loan added to the principal. To make a dent in the principal, you must assess the current debt and revisit some definitions and practices within the team.</p><p>What exactly amounts to technical debt? Obsolete libraries or APIs. Known bugs. Performance problems caused by the aforementioned. Every shortcut taken to deliver on time, without actually addressing the problem. They are all part of the overall debt. Does sub-optimal architecture constitute technical debt? Probably not, and architectural changes are the stuff of a whole other article. What about gaps in unit testing? Probably not debt as such, but implementation of these tests will probably be part of the good practices that will keep you from adding to your debt.</p><p>Your first step is to dissect the application and categorize every item of debt. The time required for this analysis will depend largely on the previous team’s involvement in the project. Once you’ve analyzed the project, you’ll find that each item of debt falls into one of three main categories:</p><ul> <li>Items of general debt that can be paid off with no direct consequences on functionalities. They are good candidates for their own story (for example: “update React to the latest version”);</li> <li>Items of general debt that repeat across functionalities. 
They are good candidates for becoming sub-tasks (for example, “transition classes to functional components”);</li> <li>Items that are story-specific and will not be duplicated (typically, bugs).</li></ul><p>Before assessing the effort required to pay off each item of debt, you must revisit the concept of completion. When can a story be considered completed, and closed? Before the beginning of QA testing? After QA testing and merging into the main development branch, but before quality control testing? Or after the feature is delivered to users?</p><p>In our case, my team found that the most beneficial approach was to consider stories as completed after the successful completion of QA testing and the merging of our code into our development branch. However, the agreement provided that this branch had to be deliverable at any given time, meaning that no incomplete or problematic merge was ever allowed.</p><p>During the first grooming, we had to implement a methodology to systematically pay down debt. This methodology proved to be quite simple yet thorough and efficient.</p><p>We decided that a maximum of 20% of each sprint would be dedicated to paying down the first category of debt in our list above, i.e. the items of debt that were not likely to create bugs affecting individual functionalities.</p><p>For the second category of debt, or items of general debt that repeated across features, we decided on a phased-in approach. Every time we addressed an existing feature, two sub-tasks would automatically be created for the story, the first for code refactoring and the second for unit tests. In most cases, these two sub-tasks accounted for 15 to 30% of the story points.</p><p>The third category of debt, feature-specific bugs, would be paid down based on the priorities set out in the backlog.</p><p>After three sprints using this approach, the results were conclusive. 
Each code refactoring allowed us to eliminate many shortcuts, improve modularity and test every line of code, which had a direct impact on ease of maintenance, implementation, reliability and performance. Better yet, the gains were tangible on both sides of the application: for developers and, especially, for users. In the end, the size of the application package was reduced by 70% (from 12MB to 3.5MB), loading time plummeted by 80%, and unit test coverage increased from 2% to 60% (about 380 new tests).</p><p>At the outset, this effort, and its cost, were very much an unknown. In the end, however, the benefits far outweighed the investment, even over the long term. It’s sometimes difficult to provide complete transparency about existing debt and to reveal it to the entire team through the backlog, but if you put in metrics for each objective, the gains will be gratifying for all concerned.</p></div>

Best Practices
5 min read
What are the attributes of a good software development manager?
<div><p>So, what makes a <em>good software development</em> manager? Before we answer that question, let us ask <em>why</em> we need a good software development manager.</p><h2>Why a Good Software Development Manager?</h2><p>If we take as a starting premise that a good software development team begets good software, then the reasons for necessitating a good manager become clearer.</p><p>First, we need a good manager to <b>clear impediments</b> so the team can perform with minimal distractions and obstacles. If the team has to deviate from its focus in order to address side issues, then its performance and efficiency in creating good software must necessarily decrease.</p><p>A second reason for needing a good software development manager is to <b>motivate and engage the team</b> consistently throughout the project lifecycle, in good times and in not-so-good ones. A software development team is (usually) composed of human beings; understanding how each individual functions is essential to guiding the group towards a common goal.</p><p>Finally, a good manager <b>identifies and manages fault lines</b>. Katerina Bezrukova, an assistant professor of group dynamics at Santa Clara University, studied Silicon Valley tech companies to determine whether team chemistry can be predicted, as well as its importance to success. Fault lines are essentially attributes that lead to group divisions, like age, gender, ethnicity, career motivation, hobbies and interests. Studying these divisions helps us understand group-composition effects, i.e. team chemistry. 
Diversity can cause fault lines, but having the ability to bridge them – to create overlapping groups obviating these divisions – will help in establishing networks to resolve potential conflicts.</p><p>There are many more excellent reasons why a good software development manager is worth their weight in gold, but the three listed above will guide us as we examine the attributes that let you become one.</p><h2>Empathy</h2><p>Good teams do not jell by fluke – it requires a good understanding of each team member and, more importantly, realizing that their worldview and perspective may differ from yours. A good manager will observe and process different behaviors and reactions. However, there is no substitute for <b>active listening</b>. Often, frustrated managers won’t understand why their engagement technique (common goal and shared cultural values, common enemy or reward-system) is not effective. Those managers tend to quickly resort to the “<em>my way or the highway</em>” style of management.</p><p>A lack of empathy for team members can actually create additional fault lines, leading to group divisions within the team.</p><h2>Anticipation</h2><p>Both clients and software developers stress their enormous appreciation of managers that <b>anticipate reactions and outcomes</b>. Such managers have taken the time to analyze how a given outcome may affect a wide range of behaviors, and have a <b>plan to address them</b>. 
This type of managerial behaviour validates the various personalities and life experiences of team members, instead of trying to mold them into a zombie-like group of workers with predictable inputs and outputs.</p><p>A manager’s ability to anticipate solidifies the underlying feeling that <b>the manager has everyone’s back</b> – the team’s and the client’s – which gives team members the peace of mind that helps them focus on their mission.</p><h2>Communication</h2><p>We would be remiss if we didn’t stress once again the importance of <b>active listening</b> as a key part of communication. A manager who exhibits good active listening skills also leads by example, cueing team members to employ active listening with each other whenever conflicts arise. The opposite is true as well: a manager who is a bad listener will cause team members to fight among themselves to get their voices heard.</p><p>Teams work better when mistrust is kept at bay. To avoid breeding mistrust, a good manager must know <b>what</b> message they want to convey, <b>when</b> to deliver it (timing) and <b>how</b> (tone). Achieving the right balance of what, when and how will require the Empathy and Anticipation that helped us achieve an understanding of group dynamics.</p><p>In keeping with the theme of “reading the room”, be aware of the potential negative impact of over-communicating. Sometimes, it is wiser to adopt a “<em>no sudden movements</em>” strategy to avoid inducing panic or exacerbating anxiety within the team. Give your team the time (and trust) to work out the issues.</p><h2>Decision-Making</h2><p>Whether you are a top-down or bottom-up manager, your team looks to you in terms of decision-making and accountability. Courage in decision-making is not about taking risks, but rather about being firm, decisive and accountable for a path taken. 
While competence and title help support credibility, a lack of courage is an instant credibility saboteur.</p><h2>Competence</h2><p>Finally, competence is still important for a manager. And we do not mean the ability to code or to know your clients’ business and industry better than anyone. Rather, it’s all about being good at the points listed above: competence in Empathy, competence in Anticipation and planning, competence in Communication and competence in Decision-Making.</p><p>Be honest about your level of competence with your team; this gives your team a chance to organically fine-tune its dynamics until everyone reaches their maximum potential competence in each area.</p><p>It is sometimes argued that software developers do not need managers. While it is true that a team can always muddle along in the absence of management, the challenges that need to be addressed do not simply go away. If anything, they keep developers from focusing on the mission, since the developers are now responsible for efficient group dynamics and other challenges that arise from working with other human beings. Conversely, bad managers impose their will on the team and create different channels to try to make themselves indispensable to progress.</p><p>Thus, it is to the benefit of the entire team – client and software developers – to have a good manager to guide and shepherd your software development project. It may be difficult to isolate and quantify the value of a good manager’s role; however, the value becomes clearer once you look at the increase in performance from everyone else on the team.</p></div>

Strategy
5 min read
How should I budget for my software development project?
<div><p>While being alive to the reality of cost is important, it does not necessarily have to come at the expense of the <b>value</b> of your Idea. In this article, we will discuss budget-setting for a software development project that brings your Idea to life, making sure this budget supports you rather than hampering you, so that you can keep on dreaming and being inspired.</p><h2>Homework</h2><p>Yes, I followed up “dreaming” and “being inspired” with “homework”. Indeed, it is important to prepare and educate yourself as best as you can to make informed decisions about your budget.</p><h3>Legwork: Comparables and Lessons Learned</h3><p>Homework can be seen as a two-part whole: the legwork and the mindset. The <b><i>legwork</i></b> includes getting <b>comparables</b> to get into the right ballpark. Ideally, we’d compare apples to apples, but it is not always easy to find the perfect apple. Keep that in mind when comparing “similar projects”, and avoid assuming that one outcome will beget another. In addition, look at <b>how well your company maintains operations with an added ongoing project</b> (software or otherwise), as a software development project is a potential source of disruption to your day-to-day operations. Successful projects rely on good collaboration – which demands time from your staff. Finally, cultivate and be attuned to the <b>lessons learned</b> from your own past projects, and others’.</p><h3>Mindset: Cost Certainty and Value Certainty</h3><p>The other part of your whole is to define your <b><i>mindset</i></b> when it comes to budgets. People often conflate the existence of a budget with a need for <b>cost certainty</b> (not to mention its own impact on time certainty). You should ask yourself if that is indeed what you are looking for, and whether you should rather focus on <b>value certainty</b>. Value is cultivated and refined over time – do you want to position your team to keep looking for value? 
Being value-driven does not mean having a blank cheque; it simply means that you are ready to re-prioritize and fine-tune the details of your budget as the situation evolves.</p><p>A <a href="https://www.spiria.com/en/services/purpose-built-development/custom-software-development/">software development</a> plan never survives first contact with reality. But doing your homework will help you make in-project budget decisions and adjustments based on facts, sound reasoning and objectives, rather than constantly fighting fires.</p><h2>Checklist</h2><p>Now, we can identify the cost elements of a software development project:</p><p><b>Labor, including travel expenses</b>: this includes the labor from your own company or a hired external firm. Don’t forget that you are paying for brainpower as well as coding fingers, and as such, technical advice does cost money and is probably the most valuable part of your project.</p><p><b>Equipment, software licenses</b>: there are many options to choose from, hence the importance of setting your goals and understanding your context accordingly. For example, free open source software may work in some cases but not in others.</p><p><b>Impact on day-to-day operations</b>: your staff may need to take some time away from their daily tasks to collaborate on the ongoing project. Once the project is delivered, there may be costs related to training and initial ramp-up time as the new system is integrated and used in daily operations.</p><p><b>Recurring cost of maintenance</b>: typically, you should budget 10% to 30% of the initial project cost for each subsequent year after go-live.</p><p>For each item, keep the following distinctions in mind:</p><ul> <li>Direct vs indirect costs</li> <li>Fixed vs variable costs</li> <li>One-time vs recurring costs</li> <li>Needs vs wants</li></ul><p>Of course, building a checklist is one thing, and assigning realistic amounts to each item is another. 
This is where the legwork pays off – the comparables will help you budget an initial amount, and the lessons learned will help you with contingency and risk assessment. Finally, it is important that there be company-wide buy-in to the budget and the value that the project will bring. A common and united front will be important when you start collecting quotes.</p><h2>Getting Ready for Quotes</h2><p>Now that you’ve internally set a preliminary budget for your software development project – for your Idea! – you can start collecting quotes from firms and vendors.</p><p>You will get a range of quotes, and perhaps a few surprises along the way. Take time to study those surprises; some may reflect real insights that you may have overlooked or underestimated, while others may be upsells or scare tactics. Alternatively, they may mean that the firms and vendors have misunderstood your mindset, or are not quoting based on your needs but rather theirs. Take this as an opportunity to clarify your own mindset.</p><p>Finally, remember that the quotes you are receiving make up just a portion of your overall budget, and not its entirety – that’s why we calculated the indirect costs and the impact of your project on your day-to-day operations.</p><h2>Keep Dreaming About Your Idea</h2><p>We started this article talking about your Idea. The initial energy and passion that you felt at that moment needs to be bottled and shared during the entire project. The grind of managing a budget may cloud your original vision and unconsciously distort it.</p><p>There is a happy medium between, on the one hand, balancing your budget – even an immutable one – and on the other hand, helping your Idea be born; your preparedness and leadership will help you walk that razor’s edge.</p><p>And we at <a href="https://www.spiria.com">Spiria</a> are here to help you. 😊</p></div>
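To make the checklist above a little more concrete, here is a minimal Python sketch of a multi-year budget estimate combining a one-time project cost with the recurring-maintenance rule of thumb (10% to 30% of the initial cost per year). All figures and the helper function are hypothetical, purely for illustration.

```python
# Hypothetical back-of-the-envelope budget helper; all figures are made up.
# The 10%-30% range is the yearly maintenance rule of thumb from the checklist.
def rough_total_budget(initial_cost, years, maintenance_rate):
    """One-time project cost plus flat yearly maintenance over `years` years."""
    return initial_cost + initial_cost * maintenance_rate * years

initial = 100_000  # hypothetical initial project cost
low = rough_total_budget(initial, years=5, maintenance_rate=0.10)
high = rough_total_budget(initial, years=5, maintenance_rate=0.30)
print(f"5-year budget range: {low:,.0f} to {high:,.0f}")
```

Even this crude arithmetic shows why the quotes you collect are only a portion of the overall budget: over five years, maintenance alone can range from half to one-and-a-half times the initial build cost.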

Artificial Intelligence
5 min read
Data Preparation for Machine Learning
<p>Data preparation consists of data collection, wrangling, and finally enfranchisement, if required and when possible.</p> <h2>Data collection</h2> <p>First, gather the data you will need for Machine Learning. Make sure you collect them in consolidated form, so that they are all contained within a single table (Flat Table).</p> <p>You can do this with whatever tool you are comfortable using, for example:</p> <ul> <li>Relational database tools (SQL)</li> <li>Jupyter Notebook</li> <li>Excel</li> <li>Azure ML</li> <li>RStudio</li> </ul> <h2>Data wrangling</h2> <p>This involves preparing the data to make them usable by Machine Learning algorithms. <em>(Data Cleansing, Data Decomposition, Data Aggregation, Data Shaping and Transformation, Data Scaling.)</em></p> <h3><em>Data Cleansing</em></h3> <p>Find all the “Null” values, missing values and duplicate data.</p> <p>Examples of missing values:</p> <ul> <li>blanks</li> <li>NULL</li> <li>?</li> <li>N/A, NaN, NA</li> <li>9999999</li> <li>Unknown</li> </ul> <div> <table cellspacing="0" cellpadding="0"> <tbody> <tr> <th>#Row</th> <th>Title</th> <th>Type</th> <th>Format</th> <th>Price</th> <th>Pages</th> <th>NumberSales</th> </tr> <tr> <td>1</td> <td>Kids learning book</td> <td>Series – Learning – Kids -</td> <td>Big</td> <td>16</td> <td>100</td> <td>10</td> </tr> <tr> <td>2</td> <td>Guts</td> <td>One Book – Story - Kids</td> <td>Big</td> <td> </td> <td> </td> <td>3</td> </tr> <tr> <td>3</td> <td>Writing book</td> <td>Adults – learning- Series</td> <td> </td> <td>10</td> <td>120</td> <td>8</td> </tr> <tr> <td>4</td> <td>Dictation</td> <td>Series - Teenagers</td> <td>Small</td> <td>13</td> <td>85</td> <td>22</td> </tr> </tbody> </table> </div> <p><code>data_frame</code> below is our Pandas dataset:</p> <pre><code># Count the number of missing values in each column of the Pandas dataframe
data_frame.isnull().sum()</code></pre> <pre><code>#Row           0
Title          0
Type           0
Price          1
Format         1
Pages          1
NumberSales    0</code></pre> <p>If certain rows 
are missing data in many important columns, we may consider removing these rows, using a <code>DELETE</code> query in SQL or <code>DataFrame.dropna()</code> in Python.</p> <p>Sometimes, the missing value can be replaced by either zero, the most common value, or the average value, depending on the column values and type. You can do this using an <code>UPDATE</code> query in SQL or <code>DataFrame.fillna()</code> in Python.</p> <p>In the following code, we have replaced the missing values of “Pages” with the mean:</p> <pre><code>global_mean = data_frame.mean(numeric_only=True)
data_frame['Pages'] = data_frame['Pages'].fillna(global_mean['Pages'])
data_frame.isnull().sum()</code></pre> <pre><code>#Row           0
Title          0
Type           0
Price          1
Format         1
Pages          0
NumberSales    0</code></pre> <p>And the missing “Format” values with the most common value:</p> <pre><code># Counts of unique values
data_frame["Format"].value_counts()</code></pre> <pre><code>Big      2
Small    1
Name: Format, dtype: int64</code></pre> <p>As “Big” is the most common value in this case, we have replaced all the missing values by “Big”.</p> <pre><code># Replace missing "Format" values with the most common value, "Big"
data_frame["Format"] = data_frame['Format'].fillna("Big")
data_frame["Format"].value_counts()</code></pre> <pre><code>Big      3
Small    1</code></pre> <p>The resulting <code>data_frame</code> is as follows:</p> <div> <table cellspacing="0" cellpadding="0"> <tbody> <tr> <th>#Row</th> <th>Title</th> <th>Type</th> <th>Format</th> <th>Price</th> <th>Pages</th> <th>NumberSales</th> </tr> <tr> <td>1</td> <td>Kids learning book</td> <td>Series – Learning – Kids -</td> <td>Big</td> <td>16</td> <td>100</td> <td>10</td> </tr> <tr> <td>2</td> <td>Guts</td> <td>One Book – Story - Kids</td> <td>Big</td> <td>13</td> <td>100</td> <td>3</td> </tr> <tr> <td>3</td> <td>Writing book</td> <td>Adults – learning- Series</td> <td>Big</td> <td>10</td> <td>120</td> <td>8</td> </tr> <tr> <td>4</td> <td>Dictation</td> <td>Series - Teenagers</td> <td>Small</td> <td>13</td> <td>85</td> <td>22</td> </tr> 
</tbody> </table> </div> <p>Make sure you have no duplicates. Delete duplicate rows using <code>DELETE</code> in SQL or <code>DataFrame.drop_duplicates()</code> in Python.</p> <h3><em>Data Decomposition</em></h3> <p>If some of your text columns contain several items of information, split them into as many dedicated columns as necessary. If some columns represent categories, convert them into dedicated category columns.</p> <p>In our example, the “Type” column contains more than one item of information, which can clearly be split into 3 columns, as shown below (Style, Kind and Readers). Then go through the same process as above for any missing values.</p> <div> <table cellspacing="0" cellpadding="0"> <tbody> <tr> <th>#Row</th> <th>Title</th> <th>Style</th> <th>Kind</th> <th>Readers</th> <th>Format</th> <th>Price</th> <th>Pages</th> <th>SalesMonth</th> <th>SalesYear</th> <th>NumberSales</th> </tr> <tr> <td>1</td> <td>Kids learning book</td> <td>Series</td> <td>Learning</td> <td>Kids</td> <td>Big</td> <td>16</td> <td>100</td> <td>11</td> <td>2019</td> <td>10</td> </tr> <tr> <td>2</td> <td>Guts</td> <td>One Book</td> <td>Story</td> <td>Kids</td> <td>Big</td> <td>13</td> <td>100</td> <td>12</td> <td>2019</td> <td>3</td> </tr> <tr> <td>3</td> <td>Writing book</td> <td>Series</td> <td>Learning</td> <td>Adults</td> <td>Big</td> <td>10</td> <td>120</td> <td>10</td> <td>2019</td> <td>8</td> </tr> <tr> <td>4</td> <td>Writing book</td> <td>Series</td> <td>Learning</td> <td>Adults</td> <td>Big</td> <td>10</td> <td>120</td> <td>11</td> <td>2019</td> <td>13</td> </tr> <tr> <td>5</td> <td>Dictation</td> <td>Series</td> <td>Learning</td> <td>Teenagers</td> <td>Small</td> <td>13</td> <td>85</td> <td>9</td> <td>2019</td> <td>17</td> </tr> <tr> <td>6</td> <td>Dictation</td> <td>Series</td> <td>Learning</td> <td>Teenagers</td> <td>Small</td> <td>13</td> <td>85</td> <td>10</td> <td>2019</td> <td>22</td> </tr> </tbody> </table> </div> <h3><em>Data Aggregation</em></h3> <p>This involves grouping data 
together, as appropriate.</p> <p>In our example, “Number of sales” is actually an aggregation of data. Initially, the database showed transactional rows, which we aggregated to obtain the number of books sold per month.</p> <h3><em>Data Shaping and Transformation</em></h3> <p>This involves converting categorical data to numerical data, since most algorithms can only work with numerical values.</p> <p>“Style”, “Kind”, “Readers” and “Format” are clearly categorical data. Below are two ways to transform them into numerical data.</p> <p><strong>1. <em>Convert all the categorical values to numerical values:</em></strong> Replace all unique values by sequential numbers.</p> <p>Example of how to do this in Python:</p> <pre><code>cleanup_nums = {"Format":  {"Big": 1, "Small": 2},
                "Style":   {"Series": 1, "One Book": 2},
                "Kind":    {"Learning": 1, "Story": 2},
                "Readers": {"Adults": 1, "Teenagers": 2, "Kids": 3}}
data_frame.replace(cleanup_nums, inplace=True)
data_frame.head()</code></pre> <p>Result:</p> <div> <table cellspacing="0" cellpadding="0"> <tbody> <tr> <th>#Row</th> <th>Title</th> <th>Style</th> <th>Kind</th> <th>Readers</th> <th>Format</th> <th>Price</th> <th>Pages</th> <th>SalesMonth</th> <th>SalesYear</th> <th>NumberSales</th> </tr> <tr> <td>1</td> <td>Kids learning book</td> <td>1</td> <td>1</td> <td>3</td> <td>1</td> <td>16</td> <td>100</td> <td>11</td> <td>2019</td> <td>10</td> </tr> <tr> <td>2</td> <td>Guts</td> <td>2</td> <td>2</td> <td>3</td> <td>1</td> <td>13</td> <td>100</td> <td>12</td> <td>2019</td> <td>3</td> </tr> <tr> <td>3</td> <td>Writing book</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>10</td> <td>120</td> <td>10</td> <td>2019</td> <td>8</td> </tr> <tr> <td>4</td> <td>Writing book</td> <td>1</td> <td>1</td> <td>1</td> <td>1</td> <td>10</td> <td>120</td> <td>11</td> <td>2019</td> <td>13</td> </tr> <tr> <td>5</td> <td>Dictation</td> <td>1</td> <td>1</td> <td>2</td> <td>2</td> <td>13</td> <td>85</td> <td>9</td> <td>2019</td> <td>17</td> </tr> <tr> 
<td>6</td> <td>Dictation</td> <td>1</td> <td>1</td> <td>2</td> <td>2</td> <td>13</td> <td>85</td> <td>10</td> <td>2019</td> <td>22</td> </tr> </tbody> </table> </div> <p><picture><source srcset="https://mirror.spiria.com/site/assets/files/5792/output_1.400x0.webp" type="image/webp" media="(max-width: 599px)" /><source srcset="https://mirror.spiria.com/site/assets/files/5792/output_1.760x0.webp" type="image/webp" media="(max-width: 999px)" /><source srcset="https://mirror.spiria.com/site/assets/files/5792/output_1.1039x0.webp" type="image/webp" media="(min-width: 1000px)" /><img src="https://mirror.spiria.com/site/assets/files/5792/output_1.png" alt="decorative" /></picture></p> <p><strong>2. <em>Dummies method:</em></strong> This consists of creating a separate column for each unique categorical value of a categorical column. Since each new column is binary (0/1), exactly one of the columns generated from a given original column holds a “1” in each row.</p> <p>How to do this in Python:</p> <pre><code># Convert categorical columns to dummy columns
data_frame = pd.get_dummies(data_frame, columns=["Format"])
data_frame = pd.get_dummies(data_frame, columns=["Style"])
data_frame = pd.get_dummies(data_frame, columns=["Kind"])
data_frame = pd.get_dummies(data_frame, columns=["Readers"])
data_frame.head()</code></pre> <p>You will notice below that “Format” generated 2 columns (“Format_Big” and “Format_Small”), because the “Format” column had 2 unique values (“Big” and “Small”). 
However, “Readers” generated 3 different columns, because it had 3 different values (“Adults”, “Teenagers” and “Kids”).</p> <div> <table cellspacing="0" cellpadding="0"> <tbody> <tr> <th>Id</th> <th>Title</th> <th>Style_Series</th> <th>Style_OneBook</th> <th>Kind_Learning</th> <th>Kind_Story</th> <th>Readers_Adults</th> <th>Readers_Teenagers</th> <th>Readers_Kids</th> <th>Format_Big</th> <th>Format_Small</th> <th>Price</th> <th>Pages</th> <th>SalesMonth</th> <th>SalesYear</th> <th>NumberSales</th> </tr> <tr> <td>1</td> <td>Kids learning book</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> <td>16</td> <td>100</td> <td>11</td> <td>2019</td> <td>10</td> </tr> <tr> <td>2</td> <td>Guts</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> <td>13</td> <td>100</td> <td>12</td> <td>2019</td> <td>3</td> </tr> <tr> <td>3</td> <td>Writing book</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>10</td> <td>120</td> <td>10</td> <td>2019</td> <td>8</td> </tr> <tr> <td>4</td> <td>Writing book</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>10</td> <td>120</td> <td>11</td> <td>2019</td> <td>13</td> </tr> <tr> <td>5</td> <td>Dictation</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>13</td> <td>85</td> <td>9</td> <td>2019</td> <td>17</td> </tr> <tr> <td>6</td> <td>Dictation</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>13</td> <td>85</td> <td>10</td> <td>2019</td> <td>22</td> </tr> </tbody> </table> </div> <p><picture><source srcset="https://mirror.spiria.com/site/assets/files/5792/output_2.400x0.webp" type="image/webp" media="(max-width: 599px)" /><source srcset="https://mirror.spiria.com/site/assets/files/5792/output_2.760x0.webp" 
type="image/webp" media="(max-width: 999px)" /><source srcset="https://mirror.spiria.com/site/assets/files/5792/output_2.1039x0.webp" type="image/webp" media="(min-width: 1000px)" /><img src="https://mirror.spiria.com/site/assets/files/5792/output_2.png" alt="decorative" /></picture></p> <p>*The “Id” and “Title” columns will not be used during our ML process.</p> <p>The benefit of the dummies method is that all values have the same weight. However, as it adds as many new columns as there are unique categories in each existing column, be cautious about using this method if you already have many columns to consider in the ML process.</p> <p>On the other hand, if you replace your categorical values with sequential numbers, the categories mapped to higher numbers may carry more weight in the result. For “Readers”, for example, category 3 (Kids) will weigh three times as much as category 1 (Adults). You can imagine what can happen when a categorical column has many different values.</p> <h3><em>Data Scaling</em></h3> <p>This process brings all numerical data onto a common scale, where that is not already the case. It is required when feature ranges vary widely. 
Data Scaling does not apply to <em>label</em> and <em>categorical</em> columns.</p> <p>Scaling ensures that all <em>features</em> carry the same weight.</p> <p>In our example, we need to scale the “Price” and “Pages” columns:</p> <ol> <li>Price [10, 16]</li> <li>Pages [85, 120]</li> </ol> <p>These two columns must be scaled, otherwise the “Pages” column will have more weight in the result than the “Price” column.</p> <p>While there are many methods of scaling, for the purposes of our example, we used the <code>MinMaxScaler</code> from 0 to 1.</p> <pre><code>from sklearn.preprocessing import MinMaxScaler

# Scale the "Price" and "Pages" columns
# (X is the NumPy feature array; "Price" and "Pages" are its first two columns)
scaler = MinMaxScaler()
rescaledX = scaler.fit_transform(X[:, 0:2])

# Put the scaled columns in a dataframe
colnames = ['Price', 'Pages']
df_scaled = pd.DataFrame(rescaledX, columns=colnames)

# Replace the original columns with the newly scaled ones
data_frame_scaled = data_frame
data_frame_scaled[colnames] = df_scaled[colnames]
data_frame_scaled.head()</code></pre> <p>The result is the following:</p> <div> <table cellspacing="0" cellpadding="0"> <tbody> <tr> <th>Id</th> <th>Title</th> <th>Style_1</th> <th>Style_2</th> <th>Kind_1</th> <th>Kind_2</th> <th>Readers_1</th> <th>Readers_2</th> <th>Readers_3</th> <th>Format_1</th> <th>Format_2</th> <th>Price</th> <th>Pages</th> <th>NumberSales</th> </tr> <tr> <td>1</td> <td>Kids learning book</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> <td>1</td> <td>0.42857143</td> <td>10</td> </tr> <tr> <td>2</td> <td>Guts</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>1</td> <td>0</td> <td>0.5</td> <td>0.42857143</td> <td>3</td> </tr> <tr> <td>3</td> <td>Writing book</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>8</td> </tr> <tr> <td>4</td> <td>Dictation</td> <td>1</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>0</td> <td>0</td> <td>1</td> <td>0.5</td> <td>0</td> <td>22</td> 
</tr> </tbody> </table> </div> <p><picture><source srcset="https://mirror.spiria.com/site/assets/files/5792/output_3.400x0.webp" type="image/webp" media="(max-width: 599px)" /><source srcset="https://mirror.spiria.com/site/assets/files/5792/output_3.760x0.webp" type="image/webp" media="(max-width: 999px)" /><source srcset="https://mirror.spiria.com/site/assets/files/5792/output_3.1039x0.webp" type="image/webp" media="(min-width: 1000px)" /><img src="https://mirror.spiria.com/site/assets/files/5792/output_3.png" alt="decorative" /></picture></p> <p>As stated, there are many other scaling methods; how and when to use each one will be the subject of a future article.</p>
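The Data Decomposition and Data Aggregation steps above are described without an accompanying snippet. Here is a minimal pandas sketch of both, on a toy dataset with hypothetical values modeled on the example tables; the exact split logic depends on how cleanly the “Type” strings are delimited.

```python
import pandas as pd

# Toy transactional dataset modeled on the example tables above.
df = pd.DataFrame({
    "Title": ["Kids learning book", "Guts", "Guts"],
    "Type": ["Series - Learning - Kids",
             "One Book - Story - Kids",
             "One Book - Story - Kids"],
    "NumberSales": [10, 2, 1],
})

# Data Decomposition: split "Type" into three dedicated columns.
df[["Style", "Kind", "Readers"]] = df["Type"].str.split(" - ", expand=True)
df = df.drop(columns=["Type"])

# Data Aggregation: sum transactional rows, e.g. total sales per title.
sales_per_title = df.groupby("Title", as_index=False)["NumberSales"].sum()
print(sales_per_title)
```

In a real project the aggregation key would typically also include the sales month and year, as in the article's example, where transactional rows were rolled up into books sold per month.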

Custom Development
5 min read
MVPs: Your most valuable plan to hit the market swiftly!
<div><p>A “Minimum Viable Product”, or MVP, is basically a pared-down version of a software product with just enough features to meet the needs of early adopters while obtaining useful feedback for future iterations of the product. Who determines what features go into the MVP, or not? The answer is “everyone”: the creator, of course; the end users, especially; as well as the developer, who is best positioned to balance fundamental purpose against technical complexity.</p><p>The idea of the MVP is that you can always release future iterations of the product (versions 2, 3, etc.), incrementally adding features at every sprint. The iteration and incremental improvement model of development has stood the test of time for all manner of products; in short, it is the best way to release your creation as quickly as possible.</p><p>The very nature of <a href="https://www.spiria.com/en/services/purpose-built-development/custom-software-development/">software development</a> is beautifully suited to this model, as long as you tweak your methodology. Your goal is to obtain comments and opinions on your product as early as possible to gain a better understanding of the actual needs of end users, which, in turn, will help you decide what features to include in your product (or not). In other words, this feedback will be your product “wish list” for versions 2, 3, and later.</p><p>To get to the best possible MVP, you’ll need to use models, proofs of concept, “spikes” and finally a working <a href="https://www.spiria.com/en/services/human-centered-design/product-prototyping/">prototype</a>, developed jointly with designers, <a href="https://www.spiria.com/en/services/human-centered-design/user-experience-design/">UX experts</a> and developers. 
A high-frequency feedback loop (for example, at each sprint demo, or at each important milestone) will ensure success.</p><p>As is always the case in software development, some features will be more complex to achieve or technically uncertain. For each feature, ask yourself if said feature is essential for initial launch — and get confirmation! — in order to avoid needlessly delaying the release of your product.</p><p>First impressions are crucial, as they say, and this goes for new software products too. This is why you should have at least one feature that will differentiate you from the competition – for example, a feature that is unique or especially effective. Best practice has you selecting just a couple of such features, saving the others for future versions. The secret to MVPs is to provide enough goodies to engage users, while always saving something for later. This balance can only be achieved by a disciplined team, supported by software experts who can work the feasibility/complexity equation, weigh the different features and make technical recommendations.</p><p>Generally, an MVP includes every aspect of the final product, but in a “lite” version. For example, it will already have a forum or a blog, but at first, it won’t be searchable. Completing such a feature in later versions, while not essential at launch, will then provide an exciting new development for users.</p></div>

Dev's Corner
5 min read
Solving the problems with the “futures” in C++
<div><p>Many modern programming languages allow you to speed up your programs with the help of async code and future values. The basic principle behind async and futures is that called functions are run on another thread, and the return values are converted to what is called a future-value. Such future-values don’t hold a real value until the asynchronous function ends. The function runs concurrently and, when it eventually returns with a value, the future-value variable is updated automatically to hold that return value. No need for explicit mutexes or messaging: all the synchronization between the initial call and the background thread is performed behind the scenes. When the initial thread accesses the future-value, it is automatically paused until the value is ready.</p><p>The main benefit of this design is that making a given function run asynchronously is ridiculously easy. Of course, it is still up to the programmer to make sure the function can actually be run on another thread safely, and that there are no data races; async and futures solely provide an easy way to spawn threads and receive a result.</p><h2>The Non-Issues</h2><p>My goal here is not to discuss how to design race-free algorithms, nor how to design data to make it easy to run multi-threaded algorithms. I’ll merely mention that one possible way to achieve this is to avoid all global variables and to pass all data to the asynchronous function by value. This way, nothing is shared between threads and thus no race can occur.</p><h2>The Issues</h2><p>While async and futures make it easy to turn a function into a thread, this very simplicity is what causes problems. Simplicity means complete lack of control. 
You have no control over:</p><ul> <li>how many asynchronous functions are being run,</li> <li>how many threads are being created to run those functions,</li> <li>how many threads are waiting for results.</li></ul><p>This requires a fine balancing act between maximizing processor usage and maintaining some control. On the one hand, you want as many asynchronous functions running as possible, to ensure the processor is fully occupied; on the other hand, you don’t want to overload the processor with too many threads.</p><h2>The Solution</h2><p>The best solution to this problem is to introduce some complexity. The additional complexity allows you to regain control over all of the issues listed above.</p><h3>First Step: Thread Pool</h3><p>The first step is to forgo async and futures as the means of maximizing processor usage. They can still be used for starting the core of parallel algorithms, but not to create multiple threads. Instead, it is best to use a thread pool.</p><p>A thread pool gives you control over the number of threads created to run parallel algorithms. You can create exactly as many threads as there are cores in the processor, ensuring maximum throughput without overloading the processor.</p><h3>Second Step: Work Queue</h3><p>While the thread pool controls how many threads are used, it does not control how functions are run by those threads. This is the job of the work queue. Asynchronous functions are added to the queue, and the thread pool takes functions from this queue to execute them and produce results.</p><h3>Third Step: Results</h3><p>While the work queue takes care of the input of the parallel algorithms, we need another mechanism to handle waiting for results. While a classic solution is to use a result queue, we have a better option: futures! Synchronizing the producer of a result and the consumer of that result is exactly what futures are for. 
The main difference here is that they are created by the thread pool.</p><h3>Fourth Step: Thread Stealing</h3><p>One problem with this design, as it stands, is that if the parallel algorithm submits sub-algorithms to the work queue and waits for their results, we could run out of threads! Each thread could be waiting for results to be produced while no threads are available to produce these results.</p><p>The solution to this is the concept of thread stealing while waiting for a result. Basically, you create a function that tells the thread to execute work from the work queue while waiting for its own result. We no longer directly access the values produced by the futures returned by the work queue. That would block the thread. Instead, we give the future-value back to the work queue, which can execute work items while waiting for the future to become ready.</p><h2>Concrete Code Example</h2><p>I’ve implemented such a scheme multiple times in the past. I’ve re-implemented it recently in an open-source application, written in C++. The application is called <i>Tantrix Solver</i> and it solves Tantrix puzzles. The application code is available on GitHub and contains multiple git branches:</p><ul> <li>One branch shows an example using pure async and futures.</li> <li>Another branch shows the same algorithm using the suggested design.</li></ul><p>The git repo on GitHub is available <a href="https://github.com/pierrebai/Tantrix">here</a>.</p><h3>Pure Async and Futures</h3><p>The git branch containing the pure async and futures code design is called “thread-by-futures”.</p><p>The code design in this branch is simple. After all, that’s the selling point of async and futures. It uses the C++ <code>std::async</code> function with the <code>std::launch::async</code> mode to create threads. However, the problems we mentioned materialize as predicted, with an uncontrolled number of threads. 
A simple Tantrix puzzle can create a couple of dozen threads, which is probably too many, but still manageable. Complex Tantrix puzzles, on the other hand, can create many <b>hundreds</b> of threads, which can badly bog down most computers.</p><h3>Thread Pool and Work Queue</h3><p>The git branch containing the thread pool and work queue code design is called “thread-pool”. I will describe this code design more thoroughly, as it is more complex, although I’ve tried to keep it as simple as possible.</p><h3>Code Design: The Easy Bits</h3><p>In this section, I will present the more straightforward elements of the design.</p><p>The first part of the design is the thread pool class. You only need to give it a provider of work and the number of threads to create:</p><pre><code style="white-space: pre;">// A pool of threads of execution.
struct thread_pool_t
{
    // Create a thread pool with a given number of threads
    // that will take its work from the given work provider.
    thread_pool_t(work_provider_t& a_work_provider, size_t a_thread_count = 0);

    // Wait for all threads to end.
    ~thread_pool_t();

private:
    // The internal function that executes queued functions in a loop.
    static void execution_loop(thread_pool_t* self);
};</code></pre><p>The work provider tells the threads what to do. It exposes a way to stop the threads, plus a wait-or-execute function that entirely encapsulates either executing one work item or waiting for an item to be executed. We will see how this is done below, with a concrete implementation; for now, here is the design of the provider:</p><pre><code style="white-space: pre;">// The provider of work for the pool.
struct work_provider_t
{
    // Request that the threads stop.
    virtual void stop() = 0;

    // Check if stop was requested.
    virtual bool is_stopped() const = 0;

    // The wait-or-execute implementation, called in a loop
    // by the threads in the thread pool.
    virtual void wait_or_execute() = 0;
};</code></pre><p>These two previous classes are hidden inside the work queue, to the point that they can actually be completely ignored by the users of the design. That’s why we won’t be discussing them further.</p><h3>Code Design: The Common Bits</h3><p>The work queue is the more complex piece. Its implementation is templated to make it easy to use for a given algorithm that produces a specific type of results.</p><p>Since this is the central part of the design, I will show it in detail, including its implementation details. I will divide the class presentation into multiple parts to make it easier to understand.</p><p>The first part of the design is the template parameters:</p><pre><code style="white-space: pre;">template <class WORK_ITEM, class RESULT>
struct threaded_work_t : work_provider_t
{
    using result_t = RESULT;
    using work_item_t = WORK_ITEM;
    using function_t = std::function<result_t(work_item_t, size_t)>;</code></pre><p>The <code>work_item_t</code> (WORK_ITEM) is the input data of the algorithm. The <code>result_t</code> (RESULT) is the output of the algorithm. The <code>function_t</code> is the actual algorithm. This allows us to support a <i>family</i> of algorithms with the same input and output. When a work item is submitted, the caller also provides the function to run, which must conform to this family.</p><p>The second part of the design of the work queue encompasses all the internal implementation data types and member variables. Here they are:</p><pre><code style="white-space: pre;">using task_t = std::packaged_task<result_t(work_item_t, size_t)>;

// How the function and work item are kept internally.
struct work_t
{
    task_t task;
    work_item_t item;
};

std::mutex my_mutex;
std::condition_variable my_cond;
std::atomic<bool> my_stop = false;
std::vector<work_t> my_work_items;
const size_t my_max_recursion;

// Note: the thread pool must be the last variable so that it gets
// destroyed first while the mutex, etc. are still valid.
thread_pool_t my_thread_pool;</code></pre><p>The <code>task_t</code> type holds the algorithm function in a C++ type that can call it while producing a C++ <code>std::future</code>. This is how futures are created. The <code>work_t</code> type is the unit of work that can be executed by a thread.</p><p>The first two member variables in the work queue are the mutex and condition variable, both used to protect the data shared between the threads and the caller.</p><p>The atomic <code>my_stop</code> variable is used to signal that all execution should stop (surprise!). The vector of <code>work_t</code> holds the units of work to be executed; it is the concrete work queue. The max recursion is an implementation detail used to avoid deep stack recursion due to thread stealing; this will be explained in more detail later. The thread pool is where the threads of execution are held, obviously.</p><p>The third part of the design includes the creation of the work queue and the implementation of the <code>work_provider_t</code> interface. This is all straightforward. We create the internal thread pool with exactly as many threads as there are cores in the processor. We also pass the work queue itself as the work provider of the thread pool.</p><pre><code style="white-space: pre;">// Create a threaded work using the given thread pool.
threaded_work_t(size_t a_max_recursion = 3)
    : my_max_recursion(a_max_recursion)
    , my_thread_pool(*this, std::thread::hardware_concurrency()) {}

~threaded_work_t() { stop(); }

// Stop all waiters.
void stop() override
{
    my_stop = true;
    my_cond.notify_all();
}

// Check if it is stopped.
bool is_stopped() const override { return my_stop; } // Wait for something to execute or execute something already in queue. void wait_or_execute() override { std::unique_lock lock(my_mutex); return internal_wait_or_execute(lock, 0); }</code></pre><p>The destructor and stop function implementations merely use the stop flag and condition variable to signal all the threads to stop. The wait-or-execute implementation is deferred to an internal function, described in the next section along with the more complex details.</p><h3>Code Design: Hard Bits</h3><p>In this section we finally get to the heart of the design, to the more complex implementation details.</p><p>First, let’s look at the function to wait for a given result. This part is still quite simple: as long as the awaited future value is not ready, we keep looking for new results or for new work to execute. This is when we do work for other queued algorithms, instead of snoozing and losing a thread. If the whole threaded work is stopped, we exit promptly with an empty result.</p><pre><code style="white-space: pre;"> // Wait for a particular result, execute work while waiting. result_t wait_for(std::future<result_t>& a_token, size_t a_recursion_depth) { while (!is_stopped()) { std::unique_lock lock(my_mutex); if (a_token.wait_for(std::chrono::seconds(0)) == std::future_status::ready) return a_token.get(); internal_wait_or_execute(lock, a_recursion_depth); } return {}; }</code></pre><p>Second, let’s look at the function that really executes the unit of work. When there is nothing to execute, it does nothing. On the other hand, when there <i>is</i> at least one unit of work queued, it executes its function, which will produce a new result.</p><pre><code style="white-space: pre;"> private: // Wait for something to execute or execute something already in queue.
void internal_wait_or_execute(std::unique_lock<std::mutex>& a_lock, size_t a_recursion_depth) { if (my_stop) return; if (my_work_items.empty()) { my_cond.wait(a_lock); return; } work_t work = std::move(my_work_items.back()); my_work_items.pop_back(); a_lock.unlock(); work.task(work.item, a_recursion_depth + 1); my_cond.notify_all(); }</code></pre><p>The only subtle thing going on is that when the function has had to wait and is awakened, it returns immediately instead of trying to execute some work. There is a good reason for returning immediately: the awakening can be due to either a result becoming available or a unit of work being added. Since we don’t know which case it is, and since the caller might be interested in new results, we return to the caller so it can check. Maybe the future value it was waiting for is ready!</p><p>Finally, here is the function to submit work for execution:</p><pre><code style="white-space: pre;"> // Queue the given function and work item to be executed in a thread. std::future<result_t> add_work(work_item_t a_work_item, size_t a_recursion_depth, function_t a_function) { if (my_stop) return {}; // Only queue the work item if we've recursed into the threaded work a few times at most. // Otherwise, we can end up with too-deep stack recursion and crash. if (a_recursion_depth < my_max_recursion) { // Shallow: queue the function to be called by any thread. work_t work; work.task = task_t(std::move(a_function)); work.item = std::move(a_work_item); auto result = work.task.get_future(); { std::unique_lock lock(my_mutex); my_work_items.emplace_back(std::move(work)); } my_cond.notify_all(); return result; } else { // Too deep: call the function directly instead. std::promise<result_t> result; result.set_value(a_function(a_work_item, a_recursion_depth + 1)); return result.get_future(); } }</code></pre><p>The main unexpected thing to notice is the check of the recursion depth.
The subtle problem this seeks to avoid concerns the implementation of the functions <code>wait_for()</code> and <code>wait_or_execute()</code>. Since waiting can cause another unit of work to be executed, and that unit of work could also end up waiting, in turn executing another unit... this could snowball into very deep recursion.</p><p>Unfortunately, we cannot simply refuse to execute work: if every thread refused once it got too deep, the system would cease to do any work and come to a standstill! So, instead, when the maximum recursion depth is reached within a thread, any work queued by this thread is executed immediately.</p><p>While this seems equivalent to queuing the work item, it is not. You see, the amount of work required to evaluate one branch of an algorithm is limited. In contrast, the number of units of work that can be in the queue due to <i>all</i> the branches of the algorithm can be extremely large. So we can safely assume that the algorithm was designed so that one branch will not recurse so deeply that it leads to a crash. We cannot assume the same thing about the total of all the work items waiting in the queue.</p><p>That is why it’s also a good idea to check the recursion depth in the algorithm itself and, once it is deep, not even queue new work items. Instead, the algorithm should call their functions directly, which is more efficient.</p><p>Aside from this subtlety, the rest of the code simply queues the work unit and wakes up any thread that was waiting to execute work.</p><h2>Conclusion</h2><p>As shown, this implementation of a work queue replaces async and futures with a thread pool. The caller only needs two functions: <code>add_work()</code> and <code>wait_for()</code>.
This is still a simple interface to use, but internally it gives additional control over multi-threading, avoiding the pitfalls of async and futures.</p><p>I hope that one day, the C++ standard will come with a built-in design for work queues and thread pools, so that we don’t have to roll our own by hand. In the meantime, feel free to reuse my design.</p></div>

Dev's Corner
5 min read
Multithread Wrap
<div><h2>Duplication</h2><p>The first approach is the simplest. Just duplicate the data for each thread. For this to work, the data has to meet a few criteria:</p><ul> <li>be easy to identify,</li> <li>have no hidden parts,</li> <li>be easy to duplicate,</li> <li>have no essential requirement to be shared at all times.</li></ul><p>If the data meets all these criteria, then duplication is the fastest and safest option. Usually, data that can be used this way is essentially a group of values, like a pure structure in C++, containing unchanging simple values.</p><h2>Wrapping</h2><p>If your data doesn’t meet the duplication criteria, the second approach, wrapping the data, can be used. A common case is when you are given an interface that would need to be shared among multiple threads. Here are the steps to create a multithread wrapping:</p><ul> <li>Identify the interface that needs to be isolated.</li> <li>Write a thin multi-thread protector around the interface.</li> <li>Write a thin per-thread implementation of the interface.</li></ul><p>To illustrate the technique, I will show you an example of multi-thread wrapping I recently did in C++. The code was part of the <i>Tantrix Solver</i> application I wrote. The particular item I needed to convert to multithreaded use was the progress report interface.</p><p>The code for that application is available <a href="https://github.com/pierrebai/Tantrix">on GitHub</a>.</p><h3>Identify the Interface</h3><p>The first step is to fully identify what will be used by the threads. This may require some refactoring if it is a disparate group of items. In the progress example, it was an interface called <code>progress_t</code>. Note that it only has one virtual function that really needs to be made thread-safe: <code>update_progress()</code>.</p><pre><code> // Report progress of work. // // Not thread safe. Wrap in a multi_thread_progress_t if needed. struct progress_t { // Create a progress reporter.
progress_t() = default; // Force to report the progress tally. void flush_progress(); // Clear the progress. void clear_progress(); // Update the progress with an additional count. void progress(size_t a_done_count); size_t total_count_so_far() const; protected: // Update the total progress so far to the actual implementation. virtual void update_progress(size_t a_total_count_so_far) = 0; };</code></pre><h3>Multithread Protector</h3><p>The second step is to create a multi-thread protector. The design of a protector is always the same:</p><ul> <li>Do <b>not</b> implement the interface to be protected.</li> <li>Keep custody of the original non-thread-safe interface implementation.</li> <li>Provide multi-thread protection, usually with a mutex.</li> <li>Provide protected access to the per-thread implementation.</li></ul><p>The reason not to implement the desired interface is that the multi-thread protector is not meant to be used directly. If it doesn’t have the interface, it can’t be used accidentally as the interface.</p><p>Its implementation will still mimic the interface very closely. The difference is that each corresponding function will take a lock on the mutex and call the original, non-thread-safe interface. This is how it is protected against concurrent access from multiple threads.</p><p>Here is the example for the <code>progress_t</code> interface:</p><pre><code> // Wrap a non-thread-safe progress in a multi-thread-safe progress. // // The progress can only be reported by a per-thread-progress referencing // this multi-thread progress. struct multi_thread_progress_t { // Wrap a non-thread-safe progress. multi_thread_progress_t() = default; multi_thread_progress_t(progress_t& a_non_thread_safe_progress) : my_non_thread_safe_progress(&a_non_thread_safe_progress), my_report_every(a_non_thread_safe_progress.my_report_every) {} // Report the final progress tally when destroyed. ~multi_thread_progress_t(); // Force to report the progress tally.
void flush_progress() { report_to_non_thread_safe_progress(my_total_count_so_far); } // Clear the progress. void clear_progress() { my_total_count_so_far = 0; } protected: // Receive progress from a per-thread progress. (see below) void update_progress_from_thread(size_t a_count_from_thread); // Propagate the progress to the non-thread-safe progress. void report_to_non_thread_safe_progress(size_t a_count); private: progress_t* my_non_thread_safe_progress = nullptr; size_t my_report_every = 100 * 1000; std::atomic<size_t> my_total_count_so_far = 0; std::mutex my_mutex; friend struct per_thread_progress_t; };</code></pre><p>The important functions are <code>update_progress_from_thread()</code> and <code>report_to_non_thread_safe_progress()</code>. The first one receives the progress from each per-thread progress implementation that will be shown later. It accumulates the total in a multi-thread-safe variable and only reports on it when it crosses a given threshold. The second function forwards the progress to the non-thread-safe implementation under the protection of a mutex. Here's the implementation for both:</p><pre><code> void multi_thread_progress_t::update_progress_from_thread(size_t a_count_from_thread) { if (!my_non_thread_safe_progress) return; const size_t pre_count = my_total_count_so_far.fetch_add(a_count_from_thread); const size_t post_count = pre_count + a_count_from_thread; if ((pre_count / my_report_every) != (post_count / my_report_every)) { report_to_non_thread_safe_progress(post_count); } } void multi_thread_progress_t::report_to_non_thread_safe_progress(size_t a_count) { std::lock_guard lock(my_mutex); my_non_thread_safe_progress->update_progress(a_count); }</code></pre><h3>Per-Thread Implementation</h3><p>The final part of the pattern is the thin per-thread implementation of the interface. In this case we do want to implement the interface. This will be what replaces the original, non-thread-safe implementation.
Note that it doesn’t need to be thread-safe! It is meant to be used by a single thread, and the multi-thread protection is done in the multi-thread protector shown above.</p><p>This division of labor between the protector and the per-thread part greatly simplifies both the code and the reasoning about it.</p><p>Here is the declaration of the per-thread progress in the example:</p><pre><code> // Report the progress of work from one thread to a multi-thread progress. // // Create one instance in each thread. It caches the thread progress and // only reports from time to time to the multi-thread progress to avoid // accessing the shared atomic variable too often. struct per_thread_progress_t : progress_t { // Create a per-thread progress that reports to the given multi-thread progress. per_thread_progress_t() = default; per_thread_progress_t(multi_thread_progress_t& a_mt_progress) : progress_t(a_mt_progress.my_report_every / 10), my_mt_progress(&a_mt_progress) {} per_thread_progress_t(const per_thread_progress_t& an_other) : progress_t(an_other), my_mt_progress(an_other.my_mt_progress) { clear_progress(); } per_thread_progress_t& operator=(const per_thread_progress_t& an_other) { progress_t::operator=(an_other); // Avoid copying the per-thread progress accumulated. clear_progress(); return *this; } // Report the final progress tally when destroyed. ~per_thread_progress_t(); protected: // Propagate the progress to the multi-thread progress. void update_progress(size_t a_total_count_so_far) override { if (!my_mt_progress) return; my_mt_progress->update_progress_from_thread(a_total_count_so_far); clear_progress(); } private: multi_thread_progress_t* my_mt_progress = nullptr; };</code></pre><h2>Conclusion</h2><p>I’ve used this pattern to solve multi-thread problems multiple times. It has served me well.
Feel free to reuse this design where you need it!</p><p>The particular example for the progress report interface is found in the “utility” library of the <i>Tantrix Solver</i> project <a href="https://github.com/pierrebai/Tantrix">available on GitHub</a>.</p></div>