
20 Apr 2017

Here is my second post about the MiXiT conference. The first post was about everything non-tech at MiXiT, so this one will go the more tech route. MiXiT is a multi-track conference, with non-tech keynotes and random talks (where you're assigned a random room to see a talk you don't know anything about). You could spend two whole days without seeing a single tech talk if you wanted to.

Streaming API

My first tech talk was by Audrey Neveu, about realtime applications. She did an improved version of the one I saw at BestOfWeb last year. Unfortunately, the gods of live coding were not with her. The drone she was supposed to fly refused to connect to the wifi. She had to raise and lower it by hand, which was funny, but not exactly what was planned.

She talked about the importance of real time. We react differently to things that move. Our reptilian brain is wired in such a way that if it moves, it's either:

  • Something we can eat
  • Something that can eat us

Either way, better keep an eye on it.

Same principle applies to screens today. We're drawn to screens where things are moving. We expect data to update itself without any interaction from us. We all know the frustration of having to press the Refresh button. The Refresh button is the bane of all interactivity.

We invented the SPA (Single Page Applications) as a way to develop apps that didn't require a Refresh button. The front-end is made of HTML/CSS while the data is fetched through JavaScript calls to APIs in the background. By adding a bit of polling on a regular interval, we can make data update itself.

This is not real time though, as we only get data when we ask for it, but it's a good start. If we poll data at a short interval (~1 second) we can make it look like real time. But this creates a lot of overhead. Even without such a short polling interval, most of the requests are wasted (up to 98.5% according to Zapier), meaning that the data we get is the same as on our previous call. Nothing changed.
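
As a rough illustration, here is what such a naive polling loop looks like (the endpoint and the render function are stand-ins, not from the talk):

```js
// A naive polling loop: refetch every second and re-render.
// Most of these requests will return the exact same data as the last one.
const POLL_INTERVAL = 1000; // ~1 second, short enough to "look" realtime

const render = (data) => console.log(data); // stand-in for real rendering

async function poll() {
  const response = await fetch('/api/messages'); // hypothetical endpoint
  render(await response.json());
}

setInterval(poll, POLL_INTERVAL);
```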

Sending so many requests has a cost. You have to open a new connection to the server each time, get the answer, then start again. Creating a connection has a cost, both for the client and the server. That's when long polling was invented.

Long polling is a hack around the classical client/server architecture where the server does not actually close the connection. Instead, it keeps it open, and sends data when it's updated on its side. It's a clever hack, but it does not scale. Servers are not meant to maintain that many open connections at once.

Actual realtime solutions exist, though. The most common are WebSockets and Server-Sent Events (SSE for short). They are both well supported by browsers, but have different use-cases.

WebSockets use a bidirectional channel to send events between the client and the server. It's a specific protocol on top of HTTP, so you might have to update your firewall rules to use it (which might be hard to do in some settings). It's incredibly useful if you need to build a realtime app with data flowing in both directions, like a chat or a video game.

Most of the time though, what you need is to react to events sent from the server. Actions done by the users can still go through a classical GET request. For that use-case, SSE is a much better approach. Events arrive over a plain HTTP connection kept open by the server, and you can react to them like you would to any other event in JavaScript. It's well supported (except by IE) and does not require changes to firewall rules.
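
As a sketch of how little code this requires in the browser (the endpoint and render function are stand-ins):

```js
// Server-Sent Events: the browser keeps one HTTP connection open
// and receives events as the server pushes them; no polling needed.
const render = (data) => console.log(data); // stand-in for real rendering

const source = new EventSource('/api/stream'); // hypothetical endpoint

// React to server-sent messages like any other JavaScript event
source.addEventListener('message', (event) => {
  render(JSON.parse(event.data));
});

source.addEventListener('error', () => {
  // The browser reconnects automatically; just log for debugging
  console.warn('SSE connection lost, reconnecting…');
});
```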

We're so used to having real time in our applications that not having it feels like having a search bar that asks you to press Enter to get results. We're so spoiled by applications like Amazon, Facebook or Google that we expect everything to be dynamic and to react to us instantly.

CSS Is Awesome

Then, Igor Laborie showed a packed room some nifty CSS tricks. I've been using CSS extensively myself, but I was happy to discover some new ones.

Following the Rule of Least Power, his goal was to show that CSS is powerful enough that you often don't need JavaScript or pre-processors at all. His talk was a succession of small tricks to achieve more and more complex effects. Here is a small sample:

  • By using currentColor and alpha-transparency it's often possible to style a complete element with one simple color without requiring variables.
  • Using ::before and ::after pseudo-elements, you can add content, and by using UTF-8 characters you can even draw more advanced shapes.
  • Using background color, borders, outlines, box-shadows and before/after elements, you could draw many, many, many colors on one single element.
  • Spanning content across several lines, with clever UTF-8 chars and an animation, lets you create a nice loader.
  • Linking a label to its checkbox at the top of the page lets you add a global state to the page (see the sketch after this list).
  • Adding a skew effect on a rectangle with some border-radius creates the perfect tab shape.
  • <details> and <summary> are standard HTML5 tags for a collapsible element, and <dialog> can be used for modals (along with the full-screen backdrop with pointer-events: none).
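
As an example of the checkbox trick above, here is a minimal sketch (the names and styles are made up for illustration, not from the talk):

```html
<style>
  /* When the hidden checkbox is checked, restyle the sibling <main> */
  #dark-mode:checked ~ main { background: #222; color: #eee; }
</style>

<!-- The checkbox holds a page-wide boolean state; the label toggles it -->
<input type="checkbox" id="dark-mode" hidden>
<label for="dark-mode">Toggle dark mode</label>
<main>Page content…</main>
```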

Being a developer after 40

Adrian Kosmaczewski then talked about what it means to be a developer at age 43. He talked about what changed and didn't change in all his years as a developer, and he did so with a lot of humor.

Everything has changed in 20 years. We now have smartphones in our pockets more powerful than the most powerful computer we could dream of. We can stream video and talk with people on the other side of the globe in realtime. We have access to all the knowledge of the world for free. We have e-cigarettes and e-books, self driving cars and connected fridges.

But let's not forget the large number of languages, frameworks and technologies that didn't work. From Angular to Sencha Touch, PDAs and Mini-Discs, many technologies were supposed to be the next big thing but ended up being the next big nothing. They went crashing down from the Peak of Inflated Expectations right into the Trash Heap of Failures.

But also, nothing changed. We're still using a keyboard, or a virtual one. We're still running UNIX everywhere. We're still coding with vim. We still have to fight memory leaks. We still want more power, more colors, more bits, bandwidth and whatnot. And we're never satisfied with what we have.

Technology will come and go. Sometimes it will stick, but most of the time it will die. Being a developer over 40 is like being a developer of any age. You've just seen more of the same stuff.

Focus on learning the fundamentals. The most basic layers are the ones that won't change: network, security, performance, UI. All those aspects of tech have very deep roots that we have to acknowledge and understand. They won't change anytime soon. The rest is the same wheel constantly re-invented.

Dev/Ops, one year later

The last tech talk I saw was given by Aurore and Pauline. Aurore is a Dev, Pauline an Ops. They shared the story of their project, and of how they had to make their two worlds collide to work better together.

Aurore works with 10 other Devs, while Pauline is the only Ops. They had to develop and push to production a large-scale e-commerce website (23 countries). Because they started from scratch, they could go with a pretty nifty stack of Symfony, VueJS, MySQL and ElasticSearch (for the Devs), and AWS, EC2, Varnish, Terraform and Packer (for the Ops). Both teams worked with Git, Travis and Docker.

In the talk, they shared 4 real-life stories from their project and how they solved them. All issues had a technical cause, but all were solved by a social solution.

The first story was about how they had to update their DB with the latest dump from the company CRM every day. The Devs coded a script to import the CRM dump into the DB and asked Pauline to set it up as a cron job run every 24 hours.

Several days later they spotted a big issue in production. All their products were displayed twice. Turns out they were stored twice in the DB. The first time it happened they thought it was a fluke in the script and re-pushed all the data. The second time it happened they started investigating more thoroughly.

It took some time for them to understand the root cause, as the bug only ever occurred in production, and randomly. After a while they understood that defining a cron job on a machine in a Blue/Green environment was not a good idea. It was not one machine that had the cron job on it, but actually two of them, resulting in both pushing their data to the same shared DB every day, at the same time. Most of the time it was invisible (one machine was overriding what the other just pushed), but sometimes race conditions appeared and the data was saved twice.

The solution here was to update the script so it was aware of the environment it was running on. That way, only the script in the active environment would proceed.
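
A minimal sketch of that kind of guard might look like this (how the active color is exposed is an assumption; all names here are hypothetical):

```js
// Hypothetical guard: only the machine in the active environment imports.
// Assumes Node 18+ (global fetch) and that each machine knows its color.
const CURRENT_COLOR = process.env.DEPLOY_COLOR; // e.g. 'blue' or 'green'

async function getActiveColor() {
  // Assumed internal endpoint telling which color currently serves traffic
  const response = await fetch('https://internal.example.com/active-color');
  return (await response.text()).trim();
}

async function importCrmDump() {
  /* the existing import logic (placeholder) */
}

async function main() {
  const active = await getActiveColor();
  if (CURRENT_COLOR !== active) {
    console.log(`Skipping import: ${CURRENT_COLOR} is not active`);
    return;
  }
  await importCrmDump();
}

main();
```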

The whole issue could have been avoided if, instead of going to Pauline with the 'solution', Aurore had explained what she wanted to do. They could then have found a solution together. This advice actually works in every situation. Don't force your 'solution' onto people. Express your need and listen to the other person's needs. You'll find a real solution that way.

They realized that their teams had different objectives. Devs were working in a scrum environment, where they had to deliver new features every week. The Ops goal was to keep the infrastructure stable, and fewer deployments mean more stability. They had conflicting objectives, but understanding the other side's goals makes you design better solutions.

They also realized that both sides were missing part of the picture. Devs had only a rough idea of the differences between environments. They knew the theory, but didn't understand what it implied for their code. On the other side, Ops had no idea what new features were added in the new container image they pushed to prod, nor how it would impact the topology of their network.

Both parties needed to better understand what the other was doing, because it had consequences on the way they worked. It's also true when errors occur in production. Ops can identify which part is behaving badly, but they don't know the product or the language well enough to debug it. On their side, Devs trusted their tests, and assumed that everything was going to work fine in production.

Well, production is never the same as pre-production. You can try to have a pre-prod as close as possible to the production environment, but there will always be differences. Monitoring and reacting to production issues is paramount.

They decided it was important to allocate Dev time to help the Ops when errors occur in production. In Scrum terms, it means that story points were allocated to those emergencies and taken into account in the planning for any given week. It ensured that, if something bad happened in prod, a Dev and an Ops could go fix it together. This live peer bug-fixing helped them reduce the time required to recover, as well as increasing the product knowledge of each side.

As they say:

Better to have a website with 50% of the features that works 100% of the time than a website with 100% of the features that works 50% of the time.

The seven pieces of advice they drew from this collaboration are the following:

  1. Automate everything. If you have to do the same task twice manually, automate it for the third time.
  2. Work on the same floor. If Devs and Ops are in different buildings or on different floors, they won't communicate and will rely on (wrong) assumptions. Keep them close.
  3. Debug together. When an error occurs, put all hands on deck. Have Devs and Ops work together to understand and fix the issue. Don't keep them in the dark.
  4. Talk about the needs, not the solution. Don't have Devs go to the Ops saying 'I need that', nor Ops go to the Devs saying 'Do it that way'. Talk about what you need, and find a solution that takes both parties into account.
  5. It's ok to fail. It's a tech project; it will fail at some point. That's ok. Learn from it, and go back to 1: automate so it won't fail in the same way twice.
  6. Know the differences between environments. Make sure everyone knows how local, testing, pre-prod and prod are different.
  7. Celebrate Success. Things will go wrong and it will take most of your energy when it happens, so take the time to celebrate milestones when they go well. It will help you move forward.

It was an incredible talk and I highly encourage you to watch it. Both Aurore and Pauline are incredibly skilled and humble at the same time. It was probably the best DevOps talk I've ever seen.

Conclusion

Even if I didn't see that many tech talks, the ones I saw were both interesting and a valuable use of my time. You can go to MiXiT and have the perfect blend of tech and non-tech talks.

20 Apr 2017

Last week I spent two days in Lyon, France, for the MiXiT ([mɪks ɪt]) conference. I was pleasantly surprised by the high quality, both of the event itself and of the talks. But I was even more impressed by the level of care of all the staff and attendees, the wide breadth of topics, and the conference's civic and activist involvement.

You could feel the conference was for people, for humans, for citizens, before being for developers. The tech skill level was also impressive, but more impressive was how both aspects were so intertwined.

This blog post is about all the non-tech talks. I will write a tech-oriented wrap-up soon and link it from here.

Group dynamics

The first talk set the tone (pun intended) for the day. When I entered the main auditorium, I discovered a live classical orchestra on stage. The conductor talked about band dynamics, and its influence on the rhythm.

He never explicitly stated it, but I could not resist drawing parallels with other kinds of groups, like dev teams or companies.

Each member of a group can be playing their own unique tune, but when put together, the melody will be different from the sum of all individualities. Every individual is skilled at what they are doing, and each knows the global melody they want to achieve as a group. They still need to be aware of the others, to follow the same rhythm. The bigger the band, the harder it is for individuals to keep in sync with everyone.

That's when the conductor comes into play. Each individual in the group now only needs to focus on the conductor and follow his rhythm. The conductor keeps track of what everyone is doing, and helps those in need, adapting the tempo so the group acts as a whole.

A good leader can let all members of the band focus on what they each do best —playing their instrument— while reducing the amount of energy needed to focus on the tempo. A mediocre leader will not prevent a skilled band from playing, but will make it harder. Each individual will do his or her best, and the result will still be enjoyable. A bad leader can bring a group to a halt, where the amount of energy needed to follow the instructions is such that people don't have enough energy left to do what they are skilled at.

An inspiring talk, all in the nuances of 'show, don't tell'. It got my brain started, which is always a good way to start a day of conferences!

Keynotes

Other talks, especially keynotes, were centered around issues we face, as citizens, not as developers. We had talks about local currency, universal income, ethics or alternative voting systems.

Lyon has its own local currency, the Gonette (as do more than 40 other cities in France). Local currencies help money stay within a local system instead of leaking into more global speculative markets.

Universal income is a concept where all citizens of a country receive a fixed income, with no requirements attached. Speakers debunked the classical 'so even rich people will have it?', 'but people will stop working!', and 'it's too expensive, states don't have enough money for that' questions.

The debunking I liked the most was the polls showing that when asked about Universal Income, most people think the others will stop working. But when asked if they themselves would stop working, they say no. Everyone thinks others are lazier than themselves.

Ethics

Then Guillaume Champeau made me think again with questions about ethics, as a developer.

We all know that bugs are bad and we should avoid creating them. How far one should go in making sure one's code is bug-free will greatly vary from one individual to another. I might do TDD from day one while others might push to prod without even testing manually. When you're developing your personal website it might be ok, but what happens when you're developing embedded code for self-driving cars or planes? A bug might kill people there. How far should you go in your testing? When can you say you've done enough?

What do you do when your company asks you to develop something illegal? Or if you're sent on a mission for a company you find unethical? Coding something you know will be used to create weapons that will kill people? What about stopping bug fixes and releases on a product you know has security holes that will leak personal data? What about feeding data you know is incomplete and/or biased to a machine-learning algorithm?

What we, as developers, might be lacking is an ethical code, some kind of oath like in other professions. Doctors, lawyers, architects and accountants have to swear an oath. They are personally responsible if they fail to follow the ethical rules of their professional orders. We don't have those limits. We can do whatever we want and hide behind 'it wasn't me, it's the algorithm' (thanks to the complete misunderstanding of this word by the media and most judges).

Trying to do our best is not enough. Even swearing an oath is not enough. Mistakes will be made, shit will happen, data will leak. Still, thinking about what we don't want to happen is the first step in finding ways to prevent it. Technology is shaping the world of tomorrow, and we are its makers. We have incredible power in our hands, and not thinking about the damage we could do is irresponsible.

I don't know which rules we should abide by, collectively, and I'm not even sure rules are the solution. But we can start, individually, to think about the limits we should never cross. Let's not wait until it's too late. Better safe than sorry.

Once again, interesting questions, and my brain racing. I've asked myself those questions in the past, and came up with my own ethical lines for most of them. But I was happy to discuss with more junior developers afterwards who told me they had never thought about this before and now have to find where they stand.

After the atrocities of WW2, the Universal Declaration of Human Rights was signed. It gave every individual a set of rights that they could use to oppose states. It's 2017 today, and tech companies have more power than states. Think about Facebook and Google, which have more registered users than any state, and much more information about you than any of them.

I don't act the same when I'm alone at home or with my girlfriend. I don't talk about the same subjects in the intimacy of my home as in the subway. I don't speak and act the same way when I'm casually chatting with a coworker around the coffee machine as when I'm on stage in front of 250 people. I might sing in the shower in the morning, when I'm alone, but would never dare do it in the street. There are things I can talk about with my friends that I would not dare say if my parents could hear them, things that are private and that I can discuss with my family but would not want to share at work, or vice-versa.

We act differently based on who is watching us, who can hear what we're saying.

Now, think about everything you've told Google, and ask yourself whether you would ever have told the same things to your loved one, your boss, your parents, your friends, or a random person in the street.

Did you search for some illness symptoms you had? Did you search for an address for a place you wanted to go to? Did you search for a political party program? Did you search for a childhood crush?

After Snowden revealed that the NSA had access to this data and used it to track terrorists, people started to change their behavior. They stopped searching for some content, afraid that they would trigger something on the NSA side and be flagged as terrorists. When Facebook announced that it could automatically detect whether you were interested in some content based on the time you spend on it, people started to consciously limit the time they spent on each article, once again fearing to trigger anything.

Knowing who is watching you changes your behavior. People act differently now that they know that what they do online is not private, and can be accessed. People are afraid they will be judged by the content they read, and that it will backfire against them.

Let that sink in for a minute. People stop searching for some topics, stop reading some content, because they know they are watched. Because they know they have no privacy, they're afraid of being judged by what they read or say, so they stop reading or saying anything that is not 'the norm'. They stop doing in private what they would not do in public. They stop looking for information.

Without access to information, you can only make poor choices. Choice is the cornerstone of democracy; if you have no choice, you have no democracy. Removing privacy is dangerous as it removes the ability to be informed, hence to make sensible choices. Removing privacy destroys democracy.

That's exactly why, in a democracy, voting is anonymous. If voting was public, people would vote differently.

Fighting for privacy is not because people have some dark secret they want to keep in the shadows, it's actually a fight to keep democracy alive. Privacy is a mark of trust. No society can work without trust.

Majority Judgment

The keynote about the election system was perfectly timed, a few days before the french presidential election.

It started by explaining the limitations of the current voting system, where you do not vote based on who you want elected, but based on how your vote will influence the result. The goal of an election is to aggregate what voters chose, in order to pick the most consensual candidate. The majoritarian representation used in France does not correctly measure opinions as it forces voters to pick only one candidate. And we don't have an opinion on only one candidate, we have opinions on each of them, but those opinions are not recorded.

When it comes to something as important as picking the person that will run the state for 5 years, being asked to summarize all our views in only one name does not seem appropriate.

What is even worse is that results can be completely different if we add or remove candidates. Of course, if you add a candidate that everyone loves, he or she will be elected, and the result will be different. I'm talking about adding a candidate that will not win. Just adding one into the mix will change the outcome. Why should adding someone who won't win change the final winner?

A better system would let any number of candidates take part in the election, and the exact number should not change who is winning. A way to do that, instead of asking 'who do you think is the best of these 10 candidates?', would be to ask 'Between A and B, who do you think is best? And between A and C?', and ask the question for each pair of candidates. Based on that, we could find the candidate that is the most preferred against the others.

This system is interesting but still has one paradox. It is possible to have results such that no-one is winning. Just like the Rock-Paper-Scissors game, you can find a configuration where no clear winner exists. This is called the Condorcet Paradox.

An even better system would be to ask each voter to evaluate each candidate on a specific question. Something like 'For each of the following candidates, how well do you think they would represent French interests?' followed by a double-entry table with one candidate per line and columns for Very Good / Good / Average / Bad / Very Bad.

The question is important, as is the wording of the choices. It is important to note that we're not asking to give a note (like a star rating), we're asking to answer a question.

It is important because it will have an impact on the way we then calculate who is the winner. Maybe we want the one that has a majority of Very Good, or the one that has a minority of Very Bad, or something different. The suggested way, called 'majority judgment', is to pick the median value for each candidate (where 50% of the votes are above, and 50% are below). The candidate with the highest median is elected. In case of ties, we keep only the contenders and do a 'trimmed average': we remove the best and worst results of each and recalculate the median until we have a clear winner.
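
A minimal sketch of the median step only (tie-breaking omitted; the grade names follow the example above):

```js
// Majority judgment, median step: each vote is a grade index,
// from 0 (Very Bad) to 4 (Very Good).
const GRADES = ['Very Bad', 'Bad', 'Average', 'Good', 'Very Good'];

function medianGrade(votes) {
  const sorted = [...votes].sort((a, b) => a - b);
  // Lower median: half the votes are at or below, half at or above
  return GRADES[sorted[Math.floor((sorted.length - 1) / 2)]];
}

// Example: five voters grading one candidate
console.log(medianGrade([4, 1, 2, 2, 3])); // → 'Average'
```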

I really like that the way we express our choice is not boolean and has many more nuances. Finding the right way to extract one leader from all this data is tricky. If everyone plays fair, it will pick a candidate no-one really has a strong opinion about. No-one really wanted him or her, but no-one really rejected him or her either. Is that really what democracy is about? I'm not sure, but once again it raises an interesting question.

But the best part of this voting system, for me, is that we can easily see if candidates are massively rejected. The vote can show that people do not want any of them, and a new election with new candidates can be started.

Was really everything so awesome at MiXiT?

Yes.

As you can guess from this long blog post, the talks of the day were highly interesting. There were many others that I could not attend, but from the short presentations we had, I know there were talks about astrophysics, diversity, design, ecology and remote working.

There was one talk that felt oddly off compared to the others, about a connected green village. The subject could have been interesting, but it was presented in such a way that it was borderline cliché: TEDTalk rhythm, 'inspirational video', text-heavy slides. The message was more-or-less:

We have this awesome idea, and pre-rendered 3D images of what it will look like, with happy smiling people in it.

It will be a smart village, with smart sensors, smart water pumps and smart vegetable gardens. We'll do Big Data with it, and with AI and Machine Learning, we'll make the world a better place!

Oh, and we're hiring because we have no idea how to do anything technical.

They had everything planned over several years already, and the speaker came with her own filming crew. It felt to me like a marketing stunt to get more footage for themselves, without really caring about sharing anything with the audience. The name of the speaker was never mentioned on the slides, only the name of the guy who had the initial idea. It felt really out of place compared to the other talks.

Conclusion

Many talks made me think, made me ask questions about myself, about work, about our world, our society. I have many more questions after the event than I had before, and it's a great feeling. I hadn't felt that way since the very first editions of ParisWeb. Congrats to all the team, and I'll see you next year.

06 Apr 2017

Tonight was the Electron meetup in Paris. I didn't even know there was a dedicated meetup. I went there because my coworker Baptiste was giving a talk and I wanted to support him. I'm glad I went because I learned a lot.

Auto-update in Electron

Baptiste talked about one of the internal applications we are using at Algolia. The app lets Algolia employees search content that is spread across several other apps. Using a simple search bar, we can search GitHub issues, Asana checklists, HelpScout tickets, Salesforce leads, Confluence pages, etc.

He talked about the auto-update mechanism. Electron apps being desktop apps, people have to install them on their machine. Developers cannot push new content as they would do for a website. They have to have an update mechanism in place, and it has to be automated because they cannot rely on users manually updating their version.

Baptiste used a system called Nuts, a node app that you can host on Heroku, and that works as a middleware between your GitHub repo (where you push the new builds) and the installed applications.

When the application starts (and every 5 minutes after that), it checks if a new package is available by contacting the Nuts server. Nuts in turn queries the GitHub repo (using auth tokens if the repo is private). If a new version is available, Nuts forwards it to the application, which downloads it in the background. If no version is available, nothing happens. Both cases are handled in Electron through events fired depending on whether an update is available.
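
A minimal sketch of that check using Electron's autoUpdater module (the Nuts server URL is hypothetical, and the exact endpoint format may differ):

```js
const { autoUpdater } = require('electron');

// Point the updater at the Nuts server that proxies the GitHub releases
autoUpdater.setFeedURL({ url: 'https://nuts.example.com/update/osx/1.2.3' });

autoUpdater.on('update-available', () => {
  console.log('New version found, downloading in the background…');
});

autoUpdater.on('update-not-available', () => {
  console.log('Already on the latest version');
});

// Check on startup, then every 5 minutes
autoUpdater.checkForUpdates();
setInterval(() => autoUpdater.checkForUpdates(), 5 * 60 * 1000);
```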

Now that the technical part is over, you have a lot of UX questions to ask yourself. What do you do with this new version? Do you install it without your user knowing? Do you ask for confirmation first? Do you install it right away or do you wait for the next session?

At Algolia, we decided not to install new versions silently. Most of the users of the app are developers; they want to know when they update and which version they are using. They don't like too much magic. From a pure debugging standpoint, it was also easier for Baptiste, when a bug occurred, to know from which version users were upgrading. Installing updates silently would have hidden all that info. We decided to display a prompt asking users if they wanted to install the update now or later.

The app itself is something you use many times a day, but you rarely spend more than a few seconds in it. You display it, you type what you're looking for, you find it, click it, and it opens a new tab with your result. Once a result is selected, the app disappears. It means that most of the time, if there was an update, it didn't have time to finish downloading before you were already doing something else.

Prompting the user with 'Do you want to install the new version?' while they were doing something else was too intrusive. We decided that if the app was not focused when the download finished, we would just not show the prompt, and simply keep the update for next time.

Next time you open the app, if an update is pending, it will prompt you to install it. Once again, if the user clicks 'Later', it will simply ask again next time.
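
A sketch of that focus-aware prompt, continuing the previous snippet (the wording and button labels are made up):

```js
const { autoUpdater, dialog, BrowserWindow } = require('electron');

autoUpdater.on('update-downloaded', () => {
  const win = BrowserWindow.getFocusedWindow();
  if (!win) {
    // App is not focused: stay quiet and keep the update for next launch
    return;
  }
  const choice = dialog.showMessageBoxSync(win, {
    type: 'question',
    buttons: ['Install now', 'Later'],
    message: 'A new version has been downloaded. Install it?',
  });
  if (choice === 0) autoUpdater.quitAndInstall();
});
```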

To conclude, Baptiste also added a manual 'Check for updates' button. Electron apps really look like desktop apps, and our expectations as users are not the same for a desktop app as for a website. With apps, we want to feel in control. It's something we installed on our machine, so we should be able to tell it when to update if we want to. This button did little more than request an update check when clicked instead of waiting 5 more minutes, but it gave users the feeling that they were in control.

Overall, keep in mind that the technical part is usually the fastest. Making sure the workflow is enjoyable for your users is the hardest part. People have different expectations of desktop apps than of websites. Just because it's the same code for you does not mean it will be the same experience for your users.

GitScout

The second talk was by Michael Lefebvre, about GitScout, a macOS app to handle GitHub issues. I was surprised at first that an Electron app was advertised as a 'macOS app', because Electron is supposed to run on any platform.

I understood why they went that way. They went to great lengths to give their app the same kind of UX you would get in a native macOS app. The main example they gave is the popover notifications you can have in a native macOS app.

Those popovers can float partly 'outside' of their main window. This is not possible in an Electron app, as an app must live inside a window, and you cannot make it overflow. To solve that, they created two windows. The parent one is the main app, and it has a child one, invisible by default. When it's time to display the popover, they position the child window, make it visible, and style it to look like a popover.

The issue with that approach is that the second window then takes the focus, which marks the first window as inactive, so all the OS-level styling of inactive windows kicks in. They had to remove the OS-level handling of the window and redo it all themselves, so they could adjust it as needed. In the same vein, they had to deal with clicks on the child window that should be forwarded to the underlying window.
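
A rough sketch of the two-window idea (this is my reconstruction, not GitScout's actual code; positions and options are illustrative):

```js
const { BrowserWindow } = require('electron');

const main = new BrowserWindow({ width: 800, height: 600 });

// An invisible, frameless, transparent child window plays the popover role
const popover = new BrowserWindow({
  parent: main,
  show: false,
  frame: false,
  transparent: true,
  resizable: false,
});

function showPopover(x, y) {
  // Position the child in screen coordinates so it can appear to
  // overflow the parent window, then reveal it
  popover.setPosition(x, y);
  popover.show();
}

showPopover(780, 100); // partially outside the parent's bounds

// Clicks on transparent areas can be forwarded to the window underneath:
// popover.setIgnoreMouseEvents(true, { forward: true });
```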

They did a good job reimplementing the native behavior and handled many edge cases of the popover positioning. It took them about 4-5 days, far less than it would have taken to learn to code natively, so I'd say it was worth it... until the next macOS update breaks everything.

Cross-platform apps in Electron

Then Maxence Haltel, from Aircall, gave a presentation about building cross-platform apps in Electron. The main tool to use is electron-builder, which will help you package your build for each platform.

Any platform can be built from any platform (using Wine or Mono), as long as you are not using any native C/C++ APIs. Also note that even if the code you write is supposed to work the same on every platform, you still sometimes have to handle the specificities of each (in which case, process.platform is your friend). Apparently, there are also some Electron-specific differences between platforms, so the best way to debug is to have three machines, one for each platform.

All the build info for electron-builder is taken from your package.json. You can define, for each platform, the specific settings you want to pass to your build. Also note that by default, only the files that you explicitly require will be included. If you need any other file, you'll have to list it manually in the config.
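
A minimal sketch of what that can look like (the values are hypothetical; electron-builder reads the build key of package.json):

```js
// package.json (excerpt): per-platform settings for electron-builder
// {
//   "build": {
//     "appId": "com.example.app",
//     "mac":   { "target": "dmg" },
//     "win":   { "target": "nsis" },
//     "linux": { "target": "AppImage" },
//     "files": ["dist/**/*", "package.json"]
//   }
// }

// At runtime, platform-specific code paths branch on process.platform
switch (process.platform) {
  case 'darwin': // macOS-specific behavior
    break;
  case 'win32': // Windows-specific behavior
    break;
  default: // Linux and others
    break;
}
```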

To release your app, you can either use Nuts, which Baptiste covered in the first talk, or electron-release-server, which you'll have to host yourself but which gives you much more control over auth and release channels (beta, dev, prod, etc.).

Audio in Electron

The last talk was by Matthieu Allegre, about the sound APIs. Sound in Electron is nothing more than what you can do with sound in Chrome. Since the Chrome version is fixed in an Electron app, you know exactly what will be available and what will not. Furthermore, you can pass specific flags to Chrome to enable some features.

The demo was about listing all the input and output devices of the computer, firing events when a new one was added, and then capturing the input stream of one, to send it to the output stream of another.
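
These are plain web APIs available in Chrome, so a sketch of the demo's core could look like this (the wiring is illustrative, not the demo's actual code):

```js
// List audio devices, and react when one is plugged or unplugged
async function listAudioDevices() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.filter(
    (d) => d.kind === 'audioinput' || d.kind === 'audiooutput'
  );
}

navigator.mediaDevices.addEventListener('devicechange', async () => {
  console.log('Devices changed:', await listAudioDevices());
});

// Capture one input device and route it to a chosen output device
async function routeAudio(inputId, outputId) {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: inputId } },
  });
  const audio = new Audio();
  audio.srcObject = stream;
  await audio.setSinkId(outputId); // pick the output device (Chrome API)
  await audio.play();
}
```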

Aircall uses this (then sends it through WebRTC) to make calls between two peers, which is pretty clever. Having control over the environment and being able to build sound-oriented apps in Electron is interesting, and I'm sure it opens the door to interesting applications. I'll have to give it a try.

Conclusion

I'm glad I came. I didn't know there was such a big Electron community in Paris (I would say we were around 60), and having that many interesting talks was worth it.

22 Mar 2017

Yesterday I was at the Paris API meetup at 'La Maison du CrowdSourcing', where the KissKissBankBank office is located. The Paris API meetup is organized by Mailjet, and Grégory Betton, one of their developer advocates was the host.

The meetup historically had two talks per session, to keep the sessions short enough so people can still get back to their families without sacrificing on the networking time.

This time though, the sessions were exceptionally short, as both speakers finished their talks earlier than anticipated. The content was still interesting, and we had plenty of time to discuss afterwards, so that's not a bad thing.

API First to the rescue of my startup

The first talk was by Alexandre Estela, from Actility, who explained how to design APIs and avoid the common pitfalls. His point was that, when working in a startup, we are often time-constrained. We have tasks to do, urgently, and not enough people to do them. So when the time comes to build an API, we tend to rush it, and we end up with something that is half broken, hard to maintain and not usable. We also tend to rush into the development phase to get something into production, without spending much time thinking about the design.

He gave a list of tools that help you focus on the design of your API and its specification, and that build all the plumbing around it for you. His whole talk was focused on Swagger and the tools of its ecosystem. Following his approach, you always start with the specs of your API, spending your time thinking about the design.

Then, you use swagger-inflector on top of it. It will parse your specs, build all the plumbing and create the required endpoints for you. You follow the specification, and the tool takes care of the rest. It will even create mocks, letting you test your API right away.
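
For illustration, here is a minimal, hypothetical Swagger 2.0 spec of the kind you would write first (the API itself is made up):

```yaml
swagger: "2.0"
info:
  title: Todos API        # hypothetical API, for illustration
  version: "1.0.0"
paths:
  /todos:
    get:
      summary: List all todos
      responses:
        200:
          description: A list of todos
          examples:
            application/json:
              - id: 1
                title: Write the spec first
```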

Since no code is finished until it is documented, you also run swagger-codegen-slate to generate the documentation, following the popular Slate framework (used by Stripe), to expose how your API is supposed to work.

swagger-codegen-bbt will let you do black box testing. It will re-use the examples you defined in your specs and will test changes to it to generate real-life test scenarios.

And to finish, the most well-known: swagger-ui, which will generate a full HTML playground, exposing your endpoints and letting people play with them. Having interactive demos is, for me, the most important way to discover what an API is doing. When confronted with a new API, most users read the short description, then try to play with an example, and only after that do they read your documentation. So having a live playground for them to make requests is key for the adoption of your API.

His approach was sound: you start with the specs, and then you let the tooling generate the rest around them. The backend code will most of the time be generated in Java, because that's where Swagger comes from, but I think you can also have it generated in Node or Go (although I'm not sure all the plugins are compatible).

In the end, it will save a lot of time in the long run, but you'll have a starting cost of bootstrapping all the tooling that might not be worth it if you plan to build a single quick-and-dirty API. Having everything automated and being able to build tests, mocks, documentation and demos is invaluable, but you still need to spend time writing the specs and examples for everything else to work.

Short talk, but to the point.

PhantomBuster

The second talk was about PhantomBuster, by Antoine Gunzburger. PhantomBuster is a crawling API on top of PhantomJS. Its purpose is similar to what Kimono Labs offered. Not all websites have an API, and when you want, as a developer, to get content from them, you have to resort to crawling them.

Kimono Labs offered a GUI where you clicked on the elements of the page you were interested in, and it created an API endpoint exposing the data you selected in JSON format. It was a way to turn any website into a JSON API for easy consumption.

I'm talking in the past tense because Kimono Labs shut down at the end of February.

PhantomBuster does something similar, except that instead of providing a GUI for you to click on the elements you need, it lets you write custom JavaScript code to crawl websites and extract content. It comes packaged with many features already (like screenshots or captcha solving), but still requires you to write some code.

In the end, I'm not sure I will use the project, as I already have crawling scripts ready and use them often, but I can see how it would be useful for prototyping an API for a POC.

Conclusion

It was my first time at the Paris API meetup. I will surely suggest a talk for the next session; I liked the mood of the meetup. Thanks to both speakers for the interesting content.

19 Sep 2016

In mid-September I was in Prague for the WriteTheDocs conference. I went there with four of my colleagues to learn how to improve our documentation. We discovered much more than we initially expected.

First impression

From the first talk I realised that I actually did not know much about the community I had joined. I expected it to be composed of developers who also write documentation and enjoy it. But when the first speaker introduced himself as an engineer, and that was apparently something worth specifying, I knew I was in for some surprises.

I then discovered that there is such a job as 'Technical Writer'. After two days of conference I'm still not sure what it means, to be honest. From what I gather, they are people with skills in writing who know how to convey information in a clear and concise way. They can translate complex concepts into simpler words so others can understand them with minimal effort. A technical background is not mandatory, but asking questions is paramount. They have to deeply understand the subject to be able to synthesize it.

Documentation is code

Throughout the conference, I saw talks explaining how important documentation was and why it should not be added as an afterthought. People were exposing issues in the way documentation was done, and suggesting ways to fix those issues.

At their core, issues people had with documentation were the same issues we have with code (quality, bloat, complexity, etc). The suggested solutions were also the same we apply to code (user testing, automated testing, linters, short feedback loops, etc).

Language as code

Good writing is how you run words into someone else's brain to spark ideas. It's no different from code you execute. If you write bad code, your code will do bad things. This is exactly as true for documentation.

Documentation is as important as code, because it is like code. Language is brain code. Every word will journey through the reader's mind. You must be careful to send only important information, as fast as possible, and avoid overflow.

Syntax is paramount, and ambiguity must be avoided as it slows the process down. Readers shouldn't have to read a whole sentence before getting the meaning of it. They should be able to process it as it comes. It's the same as loading a big file to RAM versus reading it line by line.

Docs or it didn't happen

Documentation is as important as code when it comes to features. If undocumented, any feature is outdated as soon as it's shipped. If you're in a Scrum environment, it means documentation should be part of the Definition of Done of any feature.

As a developer, I will always add tests for any new feature. This is how I can prove that the feature is working. Writing documentation is proving that the feature actually exists.

If you see a GitHub repository with an empty readme, you'll assume the project is unfinished. If you see a project without documentation, you'll assume it's not usable.

This is even more true if you're documenting an API. User testing with eye-tracking showed it: when confronted with a new API, everybody searches for the documentation first. Then they look for live examples and code samples.

And like tests, the logical next step is the documentation equivalent of TDD: Documentation Driven Development. Start by writing the documentation, and then write the feature. Documenting the user-facing API before writing any code will let the best API emerge by itself.

Write drunk, edit sober

As developers, we spend more time fixing bugs and adding features than coding the initial skeleton. The same happens when writing documentation. Great documentation requires hundreds of tweaks and rewrites, and no-one ever gets it right on the first try.

Writing and editing require vastly different states of mind. Write a first draft to dump your ideas. Don't bother with typos or grammatical errors, but write down all you want to say, to get a rough word count. Then let it rest, for a couple of hours or even days, before editing it.

Keep It Simple, Stupid

Define a shared styleguide, with the voice and tone you want to keep consistent throughout your documentation. Your readers should not feel like they are reading a different author on each page.

Writing documentation is easy. Anybody can do it. What is hard is writing something that will be understood and remembered by the reader. The key is brevity and simplicity. Remove words and sentences until you think there is nothing left to remove. Then remove some more. And remember what someone famous once said: If I had more time, I would have written a shorter letter.

People will come to your pages from search engines. They won't read from top to bottom but can jump to any part of the page. They will scan the content, so help them identify what each section is about. Each of your paragraphs should explain exactly one idea and should explain it clearly (in perfect UNIX-style).

Tips'n'tricks

A good story ends with a satisfying finish, not in the middle of a cliffhanger. At the end of any page, list what has been learned, show what can be built with this knowledge or add links to the next steps.

Tools can make your life easier. They can even be plugged into a Continuous Integration service. Don't waste time doing what a computer can do better and faster than you. Focus on where you bring value.

Spend time with your users. Immerse yourself into the support team and see the real issues your users are facing. Schedule regular user-testing sessions. They are an invaluable way to know the real issues that need documenting.

Add code samples because that's the first thing developers read. Add video tutorials for beginners and interactive jsfiddles for experienced users. Don't hesitate to add pictures to explain complex concepts.

All good writers are avid readers, so read. It will give you more words to enrich your vocabulary, and thus more ways to express nuances. This is even more true if you're not a native English speaker. Translating books into other languages is also a great way to improve your writing skills.

Conclusion

Even if not exactly what I was expecting, the event was a success. We all learned a lot, met interesting people, and even had the chance to pitch DocSearch. I think I will come again next year.

We will maybe even suggest a talk, because I feel that the way we write documentation at Algolia is on the right track, even if a bit special. We write the documentation of the feature we develop, and we also do the support for it. It puts us in a virtuous circle of feedback, bug fixing and documentation enhancing.

We like doing support, but we'd rather spend our time adding new features. So enhancing the documentation and fixing bugs is our way to ensure we spend less time on support, and that's a good motivation.

Thanks to all the organizers, speakers and attendees and hope to see you next year!

You can find all the videos on YouTube
